AI in brief Amazon has removed multiple sham books – most likely AI-generated – that were published under Jane Friedman's name, after the real writer complained that someone was misappropriating it.
Friedman was shocked to see she had been credited as the author of numerous books without having written them. She writes online newsletters focused on digital media and publishing for creatives and authors, and has reported on the industry for decades.
The fake books have titles that look like something she might write, such as How to Write and Publish an eBook Quickly and Make Money or Promote to Prosper: Strategies to Skyrocket Your eBook Sales on Amazon. The titles were also listed on Goodreads, we’re told.
“Whoever’s doing this is obviously preying on writers who trust my name and think I’ve actually written these books. I have not. Most likely they’ve been generated by AI,” she explained. Friedman believes the generic writing style points to OpenAI’s ChatGPT or a similar system, and that her work is easy to spoof because so much of it is available online to copy.
At first, we’re told, Amazon didn’t respond to her requests to take the books down. According to Friedman, she was then asked for trademark registration numbers. When she said she didn’t have any, again her requests were ignored.
After public outcry, the books were eventually removed from the web giant’s shelves as well as from Goodreads.
Friedman said platforms like Amazon need to implement methods that verify the authenticity of authors and their books.
“Unfortunately, even if and when you get these insane books removed from your official profiles, they will still be floating around out there, with your name, on two major sites that get millions of visitors, just waiting to be ‘discovered’. And there’s absolutely nothing you can do about it,” she warned others.
In a statement, Amazon said: “We invest heavily to provide a trustworthy shopping experience and protect customers and authors from misuse of our service.” Goodreads also insisted it will swiftly remove books if necessary after investigating complaints of faked authors.
Speaking of AI and books… A school district in Iowa has used ChatGPT to select 19 books for removal from its grade 7-12 school libraries due to what’s said to be the titles’ sexual content. A law passed this year in the US state requires titles to be “age appropriate” and to contain no “descriptions or visual depictions of a sex act” if they are to be available to children.
The Mason City Community School District said it asked ChatGPT to draw up the list because it apparently wasn’t feasible for officials to audit every single item in the schools’ collections. Instead, the district asked the AI model whether any of the books on a larger list of commonly challenged titles contained references to sex, and those it flagged went on the removal list.
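For illustration only, the workflow the district describes boils down to a yes/no content query per title. Below is a minimal sketch assuming the official OpenAI Python SDK; the model name, prompt wording, and example titles are placeholders of ours, not the district's actual process.

```python
# Hypothetical sketch of the screening workflow described above: ask a chat
# model whether each title contains depictions of a sex act, and collect the
# titles it flags. Model, prompt, and titles are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

challenged_titles = [
    "Example Title One",
    "Example Title Two",
]

flagged = []
for title in challenged_titles:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[{
            "role": "user",
            "content": f"Does the book '{title}' contain descriptions or "
                       "visual depictions of a sex act? Answer yes or no.",
        }],
    )
    answer = response.choices[0].message.content.strip().lower()
    if answer.startswith("yes"):
        flagged.append(title)  # goes on the removal list

print(flagged)
```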
“Based on this review, there are 19 texts that will be removed from our 7-12 school library collections and stored in the administrative center while we await further guidance or clarity. We also will have teachers review classroom library collections,” the district said.
San Francisco greenlights 24/7 Cruise, Waymo robo-taxis
Officials at the California Public Utilities Commission (CPUC) voted to allow Waymo’s and Cruise’s driverless cars to operate 24/7 in San Francisco.
The 3-1 decision came after a six-hour hearing last week and allows both companies to expand their commercial autonomous taxi fleets. They will be able to pick up and drop off passengers in the US city at any time, in computer-controlled cars with no safety driver present.
Previously, Cruise could only operate its driverless vehicles in SF from late at night until the early hours of the morning. Waymo, meanwhile, could charge for rides at any time of day only if a safety driver was present, and could run its fully driverless service only if it didn’t charge passengers.
The move will no doubt irk the city’s local transit agencies, which have previously complained about the vehicles blocking traffic and driving recklessly. The San Francisco Municipal Transportation Agency urged California to refrain from increasing the number of autonomous vehicles on its streets and to collect more data examining the technology’s safety.
“While we do not yet have the data to judge AVs against the standard human drivers are setting, I do believe in the potential of this technology to increase safety on the roadway,” CPUC Commissioner John Reynolds said in a statement. “Collaboration between key stakeholders in the industry and the first responder community will be vital in resolving issues as they arise in this innovative, emerging technology space.”
AI can tell what you’re typing by listening to the sound of your keyboard
Computer scientists claim to have developed an AI model that can work out what someone is typing – and thus steal passwords or messages – from the sound of their keystrokes alone.
The paper, released on arXiv, made headlines this month. It describes a classifier trained to identify which keys are pressed on a MacBook Pro keyboard: the researchers recorded each of 36 keys being pressed 25 times, mapped the audio to the corresponding characters, and claimed the model identified keystrokes with 95 percent accuracy.
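To make the approach concrete, here is a heavily simplified sketch – not the researchers' actual pipeline, which trained a deep learning model on spectrogram-style representations of each keystroke – that fits an off-the-shelf classifier to MFCC features extracted with librosa from pre-segmented, labelled keystroke clips. The directory layout, feature choice, and model are our own assumptions for illustration.

```python
# Simplified sketch of an acoustic keystroke classifier: extract MFCC features
# from labelled per-keystroke audio clips and train a standard classifier.
# File layout, features, and model are assumptions, not the paper's method.
import glob
import os

import librosa
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def features(path):
    # Load a short clip of a single keypress and summarize it as the
    # mean MFCC vector across time.
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=20)
    return mfcc.mean(axis=1)

X, y = [], []
# Assumed layout: clips/<key>/<take>.wav, e.g. clips/a/07.wav
for path in glob.glob("clips/*/*.wav"):
    X.append(features(path))
    y.append(os.path.basename(os.path.dirname(path)))

X_train, X_test, y_train, y_test = train_test_split(
    np.array(X), np.array(y), test_size=0.2, stratify=y, random_state=0
)

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```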
Performing such an attack in real life, as we wrote last week, is much more complicated. It requires listening in on a target’s microphone to capture their typing and feed the audio into the model, which the researchers said would mean first infecting the victim’s computer with malware. The other option is to obtain an audio recording of them typing, such as from a recorded Zoom call.
When they ran the attack on Zoom-recorded audio, the researchers’ algorithm identified keystrokes with 94 percent accuracy. It’s not clear how robust the approach is, or whether accuracy drops for different types of keyboard. Different typing techniques, such as touch typing, reduced keystroke recognition to 40 percent.
To minimize the chance of this type of attack stealing passwords, the researchers recommended using complex, randomized passwords and two-factor authentication. ®