by Amanda Zink, J.D., M.A.
Hello Barbie, a doll with artificial intelligence (AI) that enables it to “talk” with children, is slated for release this November, just in time for the holiday season.
When a child activates a microphone inside Hello Barbie’s necklace, the child’s words will be recorded and transmitted to computer servers. Speech recognition software will convert the audio message to analyzable text, enabling the “correct” response to be chosen from among thousands of pre-scripted lines.
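To make that pipeline concrete, here is a minimal sketch in Python of the record–transcribe–match loop described above. Everything in it is my own hypothetical illustration, not ToyTalk’s actual code: a real system would call a cloud speech-recognition service and a far more sophisticated dialogue engine than simple fuzzy matching.

```python
import difflib

# Hypothetical pre-scripted lines; the real doll reportedly draws on thousands.
SCRIPTED_LINES = {
    "do you believe in god": "I think a person's beliefs are very personal to them.",
    "i'm getting bullied in school": "That sounds like something you should talk to a grown-up about.",
}

def transcribe(audio: bytes) -> str:
    """Stand-in for the cloud speech-recognition step: a real doll would
    upload the audio and receive analyzable text back from a server."""
    raise NotImplementedError("replace with a real speech-to-text service")

def choose_response(child_text: str) -> str:
    """Pick the closest pre-scripted line to what the child said,
    falling back to a generic prompt when nothing matches well."""
    normalized = child_text.lower().strip(" ?!.")
    match = difflib.get_close_matches(normalized, list(SCRIPTED_LINES), n=1, cutoff=0.6)
    return SCRIPTED_LINES[match[0]] if match else "That's interesting! Tell me more."

print(choose_response("Do you believe in God?"))
# -> I think a person's beliefs are very personal to them.
```

The generic fallback at the end is where the hard cases begin: it is the response a child gets whenever the conversation wanders somewhere the script never anticipated.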
Given recent articles on this toy in the popular press, I pose the question: Has Mattel taken enough time to think through the problems that could ensue from giving thousands of children a talking friend and uploading their conversations to the Cloud for the manufacturers, the children’s parents, and maybe even the NSA to hear?
Federal law (the Children’s Online Privacy Protection Act, or COPPA) requires parental consent prior to the collection of any data from products used by children under 13. Thus, when Hello Barbie is synced with the required iPhone/Android app, parents must e-sign a consent form explaining how collected data will be used. Parents are then offered a weekly email with links to their child’s audio sessions, which they can listen to and delete from the company’s servers at any time. Hello Barbie has many advocacy groups up in arms, with much of the focus on improper marketing potential and parental spying. The terms of service state simply that recordings “may be used for research and development purposes,” such as improving technology and refining the underlying algorithms.
A recent New York Times article, “Artificially Yours,” describes various challenges the development team has faced, including the near-certainty that a child using the doll “will ask Barbie all manner of those intimate questions that she wouldn’t ask an adult.” What are the ethical answers to these potentially touchy questions? A writer from ToyTalk, the entertainment company behind the doll’s technology, offered the following scenarios as possibilities:
Child: “Do you believe in God?”
Barbie: “I think a person’s beliefs are very personal to them.”
Child: “I’m getting bullied in school.”
Barbie: “That sounds like something you should talk to a grown-up about.”
Here’s what I’m wondering:
What response has Mattel come up with for when 8-year-old Jennie says, “My dad comes into my room at night and hurts me”?
How should Mattel respond when the NSA wants an address for 5-year-old Trevor, who (not understanding adult humor) tells his Barbie, “My dad wants to plant a bomb at the next Republican debate”?
Will 7-year-old Lily have any recourse when her 2048 run for President is upended by a viral clip of her saying “I hate America, I want to join ISIS,” released during the Hello Barbie scandal?
Given the expectation that children will share their most burning secrets and questions with Hello Barbie, we can reasonably anticipate such scenarios. Is it ethical to put a child’s conversations on the Internet at all when the child is far too young to understand the potential ramifications of having personal information permanently stored in the Cloud?
If Mattel has considered these more dire possibilities, what does it plan to do about them? Parents, for their part, would be wise to weigh the risks of future hacks and potential spying. Widespread government surveillance is now an accepted reality, and Internet scandals and hacks involving private individuals, politicians, and corporations seem to occur with ever-increasing frequency (see Ashley Madison, PigGate, Sony, Target, Justine Sacco, and Vine’s and YouTube’s underage sex scandals, to name a recent few). Teenagers and adults are getting themselves into enough trouble on the Internet, having repeatedly proven how easily a “what’s the harm?” philosophy today can lead to catastrophe down the road. Perhaps we should let our kids hold on to their remaining “innocence” just a bit longer.
As to the remaining issue: while the prospect of Hello Barbie identifying cases of child abuse might seem positive on the surface, several questions deserve a closer look.
Does Mattel have a right, or even an obligation, to listen to recordings for indications of crimes committed against children? U.S. child protective services receive 3.4 million referrals of child abuse or neglect each year, and a 2013 JAMA study estimates that 1 in 4 U.S. children experience maltreatment at some point in childhood.
Hello Barbie’s user agreement says recordings can’t be used for marketing purposes, but it doesn’t say anything about identifying child molesters.
If Mattel has such a right or obligation, should recordings be prescreened before they are uploaded to servers for Mom and Dad to hear? In some cases, “That sounds like something you should talk to a grown-up about” may be appropriate, but what if the child already is talking to a grown-up – the one who will receive these recordings later in the week – and that grown-up is the person harming the child? Eighty percent of child abuse is perpetrated by the child’s parent(s). If material is not prescreened for suggestions of abuse, we could be putting already vulnerable children in the way of further harm when their abusers learn they are “blabbing.” However, it is unclear whether, and how quickly, data of this magnitude could even be screened; one possibility might be to “flag” certain keywords in conjunction with CPS, as sketched below. (An interesting but tangential issue is how a parent could use a recording that suggests abuse by another individual.)
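To illustrate what such keyword flagging might look like, here is a sketch only: the patterns, function names, and routing policy are my assumptions, not anything Mattel, ToyTalk, or CPS has described.

```python
import re

# Illustrative patterns only; a real screening list would be designed
# with child-welfare experts, not hard-coded by an engineer.
FLAG_PATTERNS = [
    re.compile(r"\bhurts? me\b", re.IGNORECASE),
    re.compile(r"\bdon'?t tell (anyone|mom|dad)\b", re.IGNORECASE),
    re.compile(r"\bsecret\b", re.IGNORECASE),
]

def screen_transcript(transcript: str) -> list[str]:
    """Return the phrases that triggered a flag, if any."""
    return [m.group(0) for p in FLAG_PATTERNS
            if (m := p.search(transcript)) is not None]

flags = screen_transcript("My dad comes into my room at night and hurts me")
if flags:
    # Withhold the session from the weekly parent email and escalate it
    # to trained human reviewers instead of routing it to the parent.
    print("escalate for review:", flags)
```

Even a toy version like this surfaces the hard policy questions: who maintains the pattern list, how many false positives are tolerable, and who reviews what gets flagged.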
The parental right to delete stored conversations also raises questions. Deleted items may be disproportionately likely to contain something nefarious, and so could provide an excellent means of identifying incriminating statements parents wish to conceal. But is it ethical to have parents sign a consent form giving them the power to “delete” conversations when nothing is ever truly deleted from the Internet, especially if we might use the very act of deletion against them?
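The mechanism at issue is what engineers call a soft delete. The sketch below is a hypothetical storage model, not Mattel’s: “deletion” merely hides a recording from the parent’s view while leaving a tombstone behind on the server.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass
class Recording:
    recording_id: str
    transcript: str
    deleted_at: Optional[datetime] = None  # soft delete: nothing is erased

class SessionStore:
    def __init__(self) -> None:
        self._recordings: dict[str, Recording] = {}

    def add(self, rec: Recording) -> None:
        self._recordings[rec.recording_id] = rec

    def parent_delete(self, recording_id: str) -> None:
        # The parent sees the item disappear from their weekly email...
        self._recordings[recording_id].deleted_at = datetime.now(timezone.utc)

    def visible_to_parent(self) -> list[Recording]:
        return [r for r in self._recordings.values() if r.deleted_at is None]

    def deletion_log(self) -> list[Recording]:
        # ...but the act of deletion itself remains queryable indefinitely.
        return [r for r in self._recordings.values() if r.deleted_at is not None]
```

If deleted items really are disproportionately incriminating, that deletion log is exactly where investigators would look, which is the ethical trap the consent form never mentions.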
Compounding the moral and practical complexity of these matters is the reality that a child’s words are not always clear – or truthful. Children have difficulty distinguishing the real from the unreal, harbor an innate desire to test boundaries, and are under no mandate to speak truthfully to a toy. Launching an investigation against a parent for child maltreatment is no small matter: it could impose significant emotional and reputational harm, and even an unfounded allegation might severely damage familial relationships and trust where no wrongdoing has occurred. If we allow recordings to spur investigations, must a threshold of believability or severity be reached? How do we strike the proper balance between insulating children from harm and avoiding unnecessary harm to families?
If Mattel adopts no formal policy, should an employee who “accidentally” hears something upsetting look the other way? Do they have to? And worst of all, what happens when a child’s genuine cries for help go unnoticed and unanswered? Research indicates that while children don’t believe AI toys are biologically “alive,” they view them as possessing more humanity than mere devices. Hello Barbie might just be the only “friend” a young, abused, and scared child is able to open up to – a friend who isn’t quite real and presumably won’t get them in trouble, but who just might be able to help. If Hello Barbie doesn’t help that child, what then?
While there is some appeal to acclimating children early on to a technology that is certain to increasingly pervade our future, I fear it is too soon to let the imaginations of little ones run amok with an Internet-connected doll. What will your kids be getting for the holidays this year?