The Jewish community’s use of artificial intelligence is multi-faceted, influencing everything from healthcare to education, the arts, and of course rabbinic practice, while also raising concerns around antisemitism. Dr. Maya Ackerman, a computer scientist and AI pioneer, says, “I got to watch the whole revolution from the background, from academia, from inside industry.”
Last year Ackerman conducted the first-ever analysis of antisemitism in AI. She used the AI platform Midjourney as her basis for study and found some surprising results. “What was really devastating in it, and other systems, is not familiar antisemitic tropes like big noses.” She says new expressions of antisemitism came up instead. “For example Magen Davids becoming crosses, which was particularly painful. But my least favorite one was you ask for Passover dinner and you get Arabs eating bread.” She said this was fairly consistent across platforms. “Not only Midjourney. Also Stable Diffusion, which is integrated into about 90% of systems.”
Ackerman said Midjourney has fixed some of its references. “For a while when you would ask for Chanukah, about 10% of the time you would get a German Christmas tree with candles.” The reason for this type of bias, Ackerman says, is that AI is an honest mirror of Western collective consciousness. “It’s based on Western data. Do Westerners know the difference between Jews and Arabs? Do Westerners know that Chanukah has nothing to do with Christmas trees?” She says the results are revealing. “The converting of a Star of David into a cross? You can’t come up with anything more Western if you look at history.”
Of course, antisemitism is not the only concern or focus for those who use AI. Rabbi Joshua Kullock, of West End Synagogue, says he can see both positive and negative perspectives. “Personally, for what it’s worth, the use of AI, is it good or bad? It’s like asking is science good or bad,” he said. “You can use science to create a cure for cancer, and you can also use science to create an atomic bomb.” He adds that blaming negative outcomes on science is akin to taking away personal responsibility. “It is our responsibility regarding the uses and purpose for which we engage with science or ask for help with ChatGPT.”
Ackerman agrees with Kullock that it is up to the individual to be responsible in using AI. “As long as you use it believing in your own intellect more than in the brilliance of the machine, you almost can’t go wrong.” In other words, individuals should rely on their own judgment and decision making when using AI.
It is precisely the development of judgment and decision-making ability that concerns some educators. Alene Arnold is associate head of school for teaching and learning at Jewish Middle School (JMS). She says, “We’re at this pivotal moment in history that I think is unique in that once we pass through it, we will have gotten control of it; we’ll have our arms around it in a similar way that we do to the internet with kids.” She references those early days when, in her opinion, “The internet had control of us, rather than the other way around.” But today, there are best practices and a comfort level with it that didn’t exist in the beginning.
Much like with the internet, Arnold says the work now is determining where to use AI. “Every educator, every teacher, really every adult when it comes to school usage has a good sense of that.” She says the real challenge is that the students do not. “They were raised on the internet so their natural recoil against it is not there at all.”
At JMS, educators are developing norms and best practices that emphasize critical thinking. “If you are questioning whether to use AI, the answer is ‘no,’” Arnold says, adding that students must have express permission to use it. She concedes that using AI for research feels a little easier, but students are encouraged to dig deeper into the sources AI is using, much as with Wikipedia, for which students are also required to check the cited sources. This approach is exactly what Ackerman suggests when she talks about using individual intellect and responsibility.
Arnold says the school does allow students to use AI for things like time management and mapping pathways to completing projects. She says the teachers use it for mapping curriculum, and she encourages her team to model responsible behavior. “I look at it and I correct it and I still use my critical thinking skills.”
Another field facing challenges from AI is the arts. Jeremy Brook is an entertainment attorney who works in the music, film, and television industry. He is also the founder of a startup company called VINIL, which stands for Voice Identity Name Image Likeness. The company’s technology works to counter what he says is the problem of deepfakes, the practice of using AI-generated voices and images of celebrities and performers. “The whole goal of it is to protect people’s likeness.”
Brook says while the practice is devastating to artists, there is a lot of nuance in this area of law. “In any intellectual property or related area, I always describe it as a vast sea of gray. And everything is heavily fact dependent.” He says it is not a one-size-fits-all issue. He cites the example of trying to sell a copy of the Mona Lisa as the original, which he says would be fraud.
But if someone sells a copy, and it is identified as a copy, that is not a crime, since the painting is in the public domain. “It highlights how in all of these areas of law there is a balancing test. On the one hand you want to protect creators and their work. But on the other hand, we have society’s interest in free and open expression.” Where things cross the line, says Brook, is when someone’s likeness is used to sell something, or for an endorsement or advertisement. “That sort of thing is generally considered to be off limits.”
Brook explains that this area of law extends back nearly 100 years, beginning with privacy rights and expanding into publicity law. “If you’re a celebrity, your likeness has more value than the average Joe. So other people shouldn’t be able to exploit your likeness for commercial gain without your approval.” The area of law developed state by state. In Tennessee, it was important because of the burgeoning music and entertainment industry. This fractured approach leaves many inconsistencies. “There is an effort by the Uniform Law Commission to determine whether there should be a uniform law across all 50 states.” Brook sits on that commission as an observer.
New challenges have appeared with the development of AI, which can produce what Brook calls a “hyper realistic” image. “It’s very easy to do that and the technology is now the worst it will ever be in terms of quality, and it’s moving very rapidly.”
He cites an April 2023 case in which a song, “Heart on My Sleeve,” was released purportedly by the artists Drake and The Weeknd. It turned out both performers’ voices were completely AI generated. “This was the moment of crossing the Rubicon,” says Brook. That incident is what inspired Brook’s company, which now applies certificates of authenticity to authorized content, a pre-emptive move designed to protect artists.
In the area of health care, specifically mental health, practitioners are also facing challenges around privacy. Allie Krew is a licensed therapist in private practice in Nashville. She finds two main uses for AI in her work. The first is electronic health records software, which assists health professionals with note writing. This means the therapist’s laptop is on during the appointment and is essentially taking notes. The therapist still has to review the notes and sign off, but Krew finds the practice troubling. “It’s listening in the room with you when you are doing your appointments.” She says the use of AI in this way can be beneficial for therapists with disabilities, so she concedes there is a meaningful use. “I have a really hard time with the use of that technology being used freely when we’re not being discerning.”
The other, more troubling use is patients’ reliance on virtual, or AI, therapists. “I did not anticipate that coming up in my work. I just didn’t think the folks I work with would use it in any way, but I’ve had people acknowledge they will be feeling something or going through something between appointments, and they’ll put it into AI and get feedback,” says Krew. What this amounts to, she says, is people using AI as their therapist. Most troubling is the personal nature of therapy and the risks of interacting with software programmed with conditioned responses. She cites recent news reports of teens being told to self-harm by their AI therapist. “I just cannot trust it and it makes me really anxious that people are using this as a resource for something that is so deeply personal.” But she says it can be complicated to push back when patients say they find value in the experience. She adds that to date, the technology does not appear to be programmed for safety, such as by detecting certain buzzwords.
Most people now encounter the use of AI in political campaigns. Jacob Kleinrock is a political consultant. He says there are ways AI is helping him do his job better, but there are also more sinister uses. “I don’t think I’m using AI to the full extent I could,” he says. “We use it to help writing solicitation letters to improve outcomes.” He does say that most of the time he has to edit those letters, because it is obvious when AI is being used. Kleinrock says there are times when AI is useful in creating spreadsheets and other types of forms. “We used to have interns doing line by line entry, so this is more efficient.”
He says it is getting harder to distinguish deepfakes from real portrayals. “It’s funny when it’s bunny rabbits jumping on a trampoline. It’s not funny when it’s the president of the United States declaring war.” Kleinrock says right now it is easy to tell what is AI and what is not. But he says eventually the technology will improve to the point of being indistinguishable from reality. “It’s not only just looking at the source to see if it came from ABC News or from OAN.”
Most people agree the future of AI is still to be written. Kleinrock says we are at the tip of the iceberg. “I think we’re in for a reckoning one way or another. I wouldn’t be opposed to them passing a law to put the brakes on it or ban it. Unemployment is going to skyrocket because of jobs being lost.” He also calls AI a “pre-partisan” issue. “It’s not necessarily Democrats are against, and Republicans are for or vice versa. It feels like our elected officials are figuring it out as they go.”
At the end of the day, Ackerman stresses that when it comes to the influence of AI on the Jewish community specifically, there need to be serious changes at the corporate level, made by working with AI companies. “This is a new world, and this is where we need to concentrate,” she says. “Even if it’s [AI] not apparently making things worse today, when in history did we have a chance to get in and control our own narrative, even a little bit?” And in the wake of October 7th, she says even though this work might be hard, it is not as hard as what we have already experienced. “We have got to do everything we can to minimize Jewish deaths.”