Similar projects have been attempted to varying degrees, but our user group is unique: the condition varies from child to child, and for research-ethics reasons we are not allowed to test with our real users. All of our research therefore builds on existing studies and on in-depth conversations with parents and therapists. Moreover, even though we narrowed our users to children with autism aged 3-5, the nature of autism makes a solution hard to reach: most of these children (80%) are non-verbal, and some have sensory disorders, so a toy with a digital screen and robot-like features can create more problems than it solves.
I am the UX/UI designer on the team, working alongside an industrial designer, two developers, a product owner, and a project manager.
The team's long-term goal is a proof of concept in which a robot toy interacts with YouTube videos and, in this way, teaches kids with autism to identify and express emotions.
Our users fall into two main groups. The first is children with autism, the people we are designing for. The second is their parents, who will also use the product to be a better companion to their children. We have been doing research from the very beginning; however, under research ethics, our real users are nowhere near suitable for direct user testing. What we have at hand is research that institutions have already conducted, along with insights from therapists and robotics experts. We also work closely with parents to better understand their needs and take them into consideration.
It is through our research that we identified that children with autism tend to have varied, hard-to-predict preferences, yet some patterns are promising enough to design upon.
First is the voice UI (VUI) system, which is different from other interface systems, and ours is even more unusual because our users are mostly non-verbal. So we combine a digital screen with voice input, where sounds trigger interactions on the screen.
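Since most of our users are non-verbal, the triggers would likely be sounds rather than words. A minimal sketch of this idea, purely as an illustration (the sound labels, screen actions, and function names below are all hypothetical, not part of our actual implementation):

```python
# Hypothetical sketch of a voice-to-screen trigger map. The idea: an
# upstream audio classifier labels a detected sound (which may be a
# non-word vocalization), and the screen responds with a matching
# interaction. All names here are illustrative assumptions.

SOUND_TO_SCREEN = {
    "laugh": "show_happy_face",       # respond to laughter with a happy face
    "hum": "show_calming_animation",  # respond to humming with a calm scene
    "clap": "show_celebration",       # respond to clapping with celebration
}


def handle_sound(detected_sound: str) -> str:
    """Return the screen action for a detected sound.

    Unknown sounds fall back to a neutral idle state, so the screen
    never reacts unpredictably, which matters for our users.
    """
    return SOUND_TO_SCREEN.get(detected_sound, "show_idle")
```

The deliberate design choice in this sketch is the neutral fallback: because unpredictable reactions can distress these children, any unrecognized sound maps to a calm idle state rather than a surprising one.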
Then there is the exterior of the robot, which carries many constraints; we need to think more deeply about materials and durability, with safety as the most important benchmark.
Next, we know that these kids rely on visual schedules and panic when things go out of their control; anything unpredictable might push them into a dangerous state.
One key need on the parents' side is the ability to control the robot: they would very much like some sort of terminal for giving tasks to the robot toy.
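One way to picture such a terminal is as a simple task queue: parents line up activities, and the robot works through them in order, so the child always sees a predictable sequence. This is only a sketch under that assumption; the class and method names are invented for illustration:

```python
from collections import deque
from typing import Optional


class RobotTaskQueue:
    """Hypothetical parent-facing task queue for the robot toy.

    Parents add tasks from their terminal; the robot consumes them
    first-in-first-out, keeping the activity order predictable.
    """

    def __init__(self) -> None:
        self._tasks: deque[str] = deque()

    def add_task(self, task: str) -> None:
        """Queue a task from the parent terminal."""
        self._tasks.append(task)

    def next_task(self) -> Optional[str]:
        """Give the robot its next task, or None when the queue is empty."""
        return self._tasks.popleft() if self._tasks else None
```

For example, a parent could queue "play calming video" followed by "practice happy face", and the robot would run them in exactly that order, matching the children's need for predictable sequences.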
There is more on the way: we still need further research to push our boundaries and learn more about our users.