1
00:00:00,167 --> 00:00:10,417
...

2
00:00:12,876 --> 00:00:16,083
The CHArt-Lutin Laboratory specializes

3
00:00:16,959 --> 00:00:19,709
in everything that has to do with the user experience

4
00:00:20,000 --> 00:00:23,584
and the use of digital technologies,

5
00:00:23,918 --> 00:00:26,918
for example, screens, or TV programs.

6
00:00:27,209 --> 00:00:30,751
And the user experience for purely ergonomic aspects,

7
00:00:31,042 --> 00:00:32,667
namely the acceptability

8
00:00:32,999 --> 00:00:34,542
of certain products,

9
00:00:34,876 --> 00:00:37,500
certain interfaces, for example,

10
00:00:37,834 --> 00:00:39,751
or their usefulness,

11
00:00:40,042 --> 00:00:43,250
their usability. The added value of the lab

12
00:00:43,584 --> 00:00:46,459
essentially lies

13
00:00:46,792 --> 00:00:48,999
in this user experience,

14
00:00:49,292 --> 00:00:51,959
this expertise in investigating

15
00:00:52,250 --> 00:00:54,626
users' expectations and needs.

16
00:00:54,959 --> 00:00:58,125
The goal is also to get designers

17
00:00:58,459 --> 00:00:59,918
to talk to users.

18
00:01:00,209 --> 00:01:03,959
The objectives of the CHArt-Lutin Laboratory are essentially,

19
00:01:04,250 --> 00:01:05,626
on the ROSETTA project,

20
00:01:05,959 --> 00:01:08,709
the evaluation of the quality of the modules produced

21
00:01:09,000 --> 00:01:11,918
for subtitles and LSF (French Sign Language).

22
00:01:12,209 --> 00:01:16,459
So we are going to work on two main aspects, I'd say.

23
00:01:16,792 --> 00:01:20,959
On the aspects of usefulness, usability, and acceptability.

24
00:01:21,250 --> 00:01:24,167
And on other, more psychological aspects,

25
00:01:24,500 --> 00:01:28,375
where, in pure cognition, we are going to look

26
00:01:28,709 --> 00:01:30,459
at the perception of certain interfaces

27
00:01:30,792 --> 00:01:33,042
or aspects of language

28
00:01:33,375 --> 00:01:36,334
such as comprehension in a digital reading situation.
29
00:01:36,667 --> 00:01:40,500
These elements fit together quite simply at the project level,

30
00:01:40,834 --> 00:01:44,792
because we get to develop methods and tools,

31
00:01:45,083 --> 00:01:49,083
particularly qualitative analysis methods

32
00:01:49,417 --> 00:01:52,709
such as brainstorming or focus groups

33
00:01:53,000 --> 00:01:56,459
that are set up with one of our partners

34
00:01:56,792 --> 00:02:02,000
on the project, where we get to meet users

35
00:02:02,334 --> 00:02:05,167
on the basis of these methods developed upstream,

36
00:02:05,500 --> 00:02:08,626
to ask them very specific questions.

37
00:02:08,959 --> 00:02:12,375
Our work is to design these methods,

38
00:02:12,709 --> 00:02:16,959
to define these tools upstream, and then to be able to meet

39
00:02:17,250 --> 00:02:20,542
the users based on the questions we are asking ourselves

40
00:02:20,876 --> 00:02:23,834
and on the evaluations we are supposed to be conducting.

41
00:02:24,125 --> 00:02:27,459
We can also have testing situations if, for example,

42
00:02:27,792 --> 00:02:31,667
consortium partners, as part of the ROSETTA project,

43
00:02:31,999 --> 00:02:36,417
request information on the level of understanding,

44
00:02:36,751 --> 00:02:41,000
the comprehensibility, of the subtitles generated by the modules

45
00:02:41,334 --> 00:02:44,459
or of the LSF performed by the avatar.

46
00:02:44,792 --> 00:02:47,250
Then we will be able to set up cognitive technologies

47
00:02:47,584 --> 00:02:50,834
and experimental situations

48
00:02:51,125 --> 00:02:54,542
with users through a digital tool, tracking their gaze

49
00:02:54,876 --> 00:02:57,626
with eye-tracking modules, for example,

50
00:02:57,959 --> 00:03:01,999
so that we can see how users

51
00:03:02,292 --> 00:03:06,417
read subtitles on the screen or look for information

52
00:03:06,751 --> 00:03:09,584
on the avatar when it signs.

53
00:03:09,918 --> 00:03:14,918
SYSTRAN