As a blind person, or another neurodiverse individual who has accessibility challenges with visual text or images, and who uses a "screen reader" to interact with Dataiku web sites (the Community, the Learning site, and the product itself), I would like to be able to successfully use all web interfaces provided by Dataiku.
COS
Note
In the United States, the Americans with Disabilities Act requires employers to make accommodations for workers with disabilities.
A screen reader turns the visual content of a web site into spoken words.
It is important that graphics have meaningful alt tags. For example, in some places on the Community site, the thumbs-up and thumbs-down icons do not announce that the thumbs-up icon is for giving a Kudo.
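To make the point concrete, here is a minimal sketch, using only Python's standard-library `html.parser`, of the kind of check that would catch an unlabeled icon. The markup and file paths below are hypothetical, not taken from the actual Community site:

```python
from html.parser import HTMLParser

class AltAudit(HTMLParser):
    """Collect the src of every <img> whose alt text is missing or empty."""

    def __init__(self):
        super().__init__()
        self.unlabeled = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        # An absent or blank alt attribute leaves the icon silent (or worse,
        # read as its file name) for screen-reader users.
        if not a.get("alt", "").strip():
            self.unlabeled.append(a.get("src", "?"))

# Hypothetical markup: the first button is the problem case, the second
# is what an accessible Kudo button might look like.
page = """
<button><img src="/icons/thumbs-up.svg"></button>
<button><img src="/icons/thumbs-up.svg" alt="Give this post a Kudo"></button>
"""

audit = AltAudit()
audit.feed(page)
print(audit.unlabeled)  # only the unlabeled icon is flagged
```

A screen reader can only speak what the markup gives it, so an automated pass like this over each page template is a cheap way to find icons that will be silent or confusing.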
I have quite a lot of experience in this field from a previous life, but equally, it is always valuable to get direct feedback from disabled users. Do you have any in your organisation, by chance, using DSS today?
12-01-2020 04:05 PM
I agree that having a full-time user of these tools would be very helpful. At this time I do not know a full-time data scientist who is blind.
I sometimes like to use a screen reader; that is how I noticed the problems with Kudos on the Community site.
So, as a disabilities advocate and a person with some knowledge of adaptive technology, I felt it was important to raise the topic.
If you need some assistance with testing, I'd be willing to pitch in as I have time.
However, to your point, recruiting a group of neurodiverse and sensory-diverse individuals as an advisory team would be very helpful to your efforts.
That said, I suspect we may have a bit of a chicken-and-egg problem here: until DSS works well with screen readers, it may be difficult to recruit a dedicated advisory team.
In this thread I note that there is a blind R users group.
This looks to be run out of the US National Federation of the Blind. I would imagine there are similar groups in the UK and Australia as well.
I would think that when you are ready you may be able to recruit some more testers.
--Tom
12-01-2020 05:00 PM
There are lots of images that have alt tags, but they add nothing to the understanding of the page: the alt text is set to exactly the same value as the image source, so it does not actually describe the image.
A screenshot of a browser inspector view of a Dataiku Academy web page, showing an image whose alt tag is exactly the same as its source: a relative path to the location of the image file.
Attached is a video in which you can hear what the built-in screen reader on an Apple Macintosh sounds like when it reads this page.
Note how the current alt tags on the images actually get in the way of understanding the content, because they break the flow.
A better alt tag for this might be:
"Line drawing: Predicting purchasing patterns. On the left, a customer with a market basket of purchased items, labeled 'Data (customer purchase info)'. In the middle, a robot representing the ML model; the robot uses a set of rules based on 'Monthly spending on web site', 'Products reviewed', 'Area of residency', 'Banks with...', and 'Previous transactions on web site' to make a binary yes/no prediction."
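The path-as-alt-text pattern described above is also easy to detect automatically. Here is a minimal sketch, assuming hypothetical file names, that flags any image whose alt text merely repeats its source path or file name; it uses only Python's standard library:

```python
import os
from html.parser import HTMLParser

class RedundantAlt(HTMLParser):
    """Flag <img> tags whose alt text just repeats the image's file path."""

    def __init__(self):
        super().__init__()
        self.redundant = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        src, alt = a.get("src", ""), a.get("alt", "")
        # Alt text equal to the src, or to just the file name, adds nothing
        # for a screen-reader user; it only interrupts the reading flow.
        if alt and alt in (src, os.path.basename(src)):
            self.redundant.append(src)

# Hypothetical markup mirroring the inspector screenshot described above.
page = '<img src="img/predicting-patterns.png" alt="img/predicting-patterns.png">'

checker = RedundantAlt()
checker.feed(page)
print(checker.redundant)  # the image with path-as-alt is flagged
```

A check like this could run against each page template; anything it flags needs a human-written description of the kind suggested above, rather than the file path.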