FDNA’s CEO Dekel Gelbman recently spoke to hundreds of leaders during the Stanford Big Data in Precision Health Summit. The article below is based on the information he presented at the conference.
By now, AI is so commonplace it’s almost embarrassing if a tech company doesn’t use it. But as much as the sci-fi world likes to pretend otherwise, truly strong general AI is still far away; most AI solutions involve machines programmed for very specific tasks. The most complex systems may conduct many of those tasks, but they’re still not cognitively independent. They still need humans for the most complex problem solving, and nowhere is that more evident than in medicine.
While computers can far outpace humans at processing and analyzing big data, the human touch is still essential to assessing the whole picture and achieving larger goals. Educating consumers about the line between man and machine is a crucial part of corporate responsibility in the AI space.
Take, for example, Face2Gene, FDNA’s flagship suite of phenotyping applications. At best, it suggests ideas that never would have occurred to a clinician. At worst, it can become a crutch. FDNA uses regular “Name This Syndrome” challenges to gamify the use of our tools and to remind users to question AI. Follow your GPS exactly and you might drive straight into a ditch; the same is true of relying too heavily on machine learning in the clinical diagnosis process.
In addition to educating users and consumers about the limits of AI, companies must consider how they can influence society. Will AI be only a tool for the wealthy, the white, and the West? Or can AI level the playing field?
The “garbage in, garbage out” premise applies to inequality as well. “Disparities in, disparities out,” we could say of AI data inputs and resulting analyses. (It’s hard to forget the stories of AI bots or algorithms becoming racist or disturbingly morbid after learning from the masses on Twitter and Reddit.)
In the genomics world, the majority of data comes from patients of European descent. To combat that uneven distribution of data, FDNA has worked diligently to create a global network of users and sites in over 130 countries. We have both a free tool and software as a service. Four years after our product launch, more than 50 percent of the patient data contributed is non-Caucasian. Any AI platform is only as good as what it’s being trained on, so as you might expect, when we feed the system with more diverse data, it becomes more robust.
Of course, the more data a company or tool touches, the more concern users have regarding privacy. Fortunately for FDNA, a byproduct of computer vision is de-identified photos, so we’re able to share the information from the data without actually sharing the data. This allows us to spread the benefits of our insights while still protecting patient privacy.
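FDNA’s actual pipeline is not public, but the general pattern, reducing a photo to a fixed-length descriptor that preserves analytic signal while discarding the identifiable image, can be illustrated with a minimal sketch. Everything below (the `deidentify` function, the random-projection stand-in for a real face-analysis model, the descriptor size) is hypothetical, chosen only to show why the shared artifact cannot reconstruct the original photo.

```python
import numpy as np

def deidentify(image: np.ndarray, dim: int = 128) -> np.ndarray:
    """Map a face image to a fixed-length descriptor vector.

    Hypothetical stand-in for a real computer-vision model: a fixed
    random projection followed by L2 normalization. Because dim is far
    smaller than the number of pixels, the projection is lossy and the
    descriptor cannot be inverted back into the photo.
    """
    rng = np.random.default_rng(0)  # fixed seed: same projection for every image
    projection = rng.standard_normal((dim, image.size))
    vec = projection @ image.astype(float).ravel()
    return vec / np.linalg.norm(vec)

# A toy 64x64 "photo": 4096 pixels reduced to 128 numbers.
photo = np.zeros((64, 64))
photo[20:30, 20:30] = 1.0
descriptor = deidentify(photo)
```

The key design point is the asymmetry: two descriptors can still be compared for clinical similarity, but no party holding only the 128-number vector holds the patient’s face.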
We as companies in this space can learn from one another how to best protect and serve our users and customers. Precision medicine may be powered by AI, but the best applications of these tools require man and machine.