IBM is looking to sell its Watson Health business, which generates around $1B in annual revenue, the WSJ reports. Watson Health was expected to be one of the company's major pushes into AI and healthcare, but physicians' reluctance to adopt the technology and difficulties collecting and analyzing data under medical privacy laws may have contributed to the division's lack of growth.
More:
- Watson Health's brands include Merge Healthcare, Truven Health Analytics, and Phytel, which IBM acquired for a combined $3.8B. The company is weighing options including a SPAC merger and a sale of the unit to a private equity firm.
- On Oct. 8, 2020, IBM announced that it would spin off its managed infrastructure services unit as a separate company to focus on its Red Hat-led cloud platform and growing markets like AI and hybrid cloud. Post-separation, more than half of IBM's revenue will come from subscriptions rather than services. IBM acquired Red Hat in 2019 for $34B.
- In a statement, IBM said work on Watson Health started nearly a decade ago, when the AI revolution was in its infancy. It said IBM is "continuing to evolve the Watson Health business, based on our decade of experience, to meet the needs of patients and physicians." It did not comment on the potential sale.
- As the WSJ notes, applying AI techniques to diagnose or treat complex health issues is a huge challenge, in part because of limited access to broad data. Algorithms trained on one population often cannot be applied to others, and tech companies lack some of the deep domain knowledge required to understand how healthcare actually works.
- Alphabet's DeepMind has also yet to turn a profit. Google's parent company has written off at least $1.5B in debt owed by the AI research lab, whose losses grew 1.5% in 2019 compared to 2018.
A version of this story first appeared in today's Inside Business. You can read the full issue here.
THE WALL STREET JOURNAL
Google says it fired Margaret Mitchell, co-lead of its ethical AI team, due to "multiple violations" of its code of conduct. This comes several weeks after Google opened a review of Mitchell, based on claims that she shared thousands of internal company files with outsiders.
More:
- Mitchell co-led the Ethical AI team with Timnit Gebru, a prominent AI researcher who left in December after Google asked her to retract an AI research paper she co-authored. Gebru had voiced frustration over managers' requests to retract the paper, as well as over Google's treatment of women and minorities, particularly in hiring, drawing support from colleagues and people online.
- After Gebru's departure, Mitchell tweeted that she was documenting "current critical issues" related to Gebru's exit "point by point, inside and outside work." She also criticized Google's AI chief Jeffrey Dean and other company leaders for ousting Gebru.
- A source told Axios that Mitchell used automated scripts to search through messages for instances of discriminatory treatment toward Gebru. She was later suspended while Google conducted an internal review of the claims.
- On Friday, Mitchell wrote in a tweet, "I'm fired." Her tweet came a day after Google announced a restructuring of its responsible AI efforts, which includes engineering VP Marian Croak overseeing "a new center of expertise" at Google Research for the responsible development of AI technologies.
ZDNET
Autonomous vehicles are "highly vulnerable" to a wide range of attacks, including adversarial machine learning attacks, according to a new report from the European Union Agency for Cybersecurity. The agency specifically describes how an attacker could manipulate an AV's image recognition so that it mislabels pedestrians, potentially causing the vehicle to strike them in crosswalks.
More:
- Such adversarial attacks could also fool motion-planning and decision-making algorithms and other systems. The study also mentions "back-end malicious activity" and sensor attacks using "beams of light."
- A widely cited 2017 study, for instance, showed that researchers could trick an AV into misidentifying a stop sign as a speed limit sign by placing a sticker over it. Tencent researchers conducted a similar study in 2019, using stickers to make Tesla's Autopilot system enter the wrong lane (a minimal sketch of this kind of attack follows the list).
- As a result, the EU agency says car manufacturers should move toward creating machine learning systems to counteract attacks and mitigate security risks. It also urges businesses and policymakers to foster a "security culture" in the auto supply chain.
- It concludes: "AI systems should be designed, implemented, and deployed by teams where the automotive domain expert, the ML expert, and the cybersecurity expert collaborate."
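To make the threat concrete, here is a minimal sketch of the fast gradient sign method (FGSM), one classic technique for crafting the kind of adversarial inputs the report warns about. The model, image, and label here are placeholders; physical attacks like the stop-sign stickers are constrained real-world variants of this optimization, not this exact code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial image with the fast gradient sign method.

    Nudges every pixel of `image` a small step (epsilon) in the
    direction that increases the classifier's loss, which is often
    enough to flip the predicted class.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step along the sign of the gradient, then clamp back to a valid image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

A perturbation this small is usually invisible to a human but can make a classifier read a stop sign as a speed limit sign; sticker attacks work by confining a much larger perturbation to a small physical patch.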
VENTUREBEAT
David Cox, director of the MIT-IBM Watson AI Lab, says his name was falsely listed as a co-author on at least two research papers out of China. The finding, first reported by Wired, highlights the potential for academic fraud, especially in the increasingly competitive field of AI.
More:
- Cox says he found his name listed as a co-author in two papers alongside three researchers in China. Wired found a third example from the same Chinese authors, where another MIT researcher was listed under a fictitious name.
- Cox says such fraud could help researchers increase their chances for publication or boost academic prestige, particularly when linked to high-level Western institutions such as MIT.
- It also points to broader problems in academic publishing. In AI and computer science especially, there is a "lack of rules around the publishing of papers," with many posted to the arXiv preprint server without peer review.
WIRED
Google has chosen the first dozen companies for its Voice AI startup accelerator. The 10-week program will give voice-AI startups access to Google experts, products, and programs to help them grow.
More:
- Among the startups chosen are Toronto-based Babbly, developer of a platform to improve children's speech and language skills; Powow AI, which uses AI to analyze and transcribe meetings; and Oto.AI, developer of an acoustic engine that can provide insights from voice streams.
- Google will help the startups with "product design, technical infrastructure, customer acquisition, and leadership development" through its companywide network of executives and mentors, according to a Google blog post.
- The Voice AI program has many similarities to Amazon's Alexa Accelerator for early-stage voice startups. It officially kicks off March 15, with companies showing off their final results in a live-streamed demo on May 20.
VENTUREBEAT
Amazon has partnered with MIT to improve and optimize its delivery routes for drivers. The two are sponsoring a competition that asks academic teams to develop machine learning models that can predict the delivery routes chosen by experienced drivers (a toy sketch of how such predictions might be scored follows the list below).
More:
- The idea is to further Amazon's efforts to find the most efficient, safe, and sustainable delivery routes for drivers.
- Amazon will provide data, such as delivery locations, package dimensions, and travel times, to help the teams train their models, which are meant to optimize routes based on more specific "driver know-how."
- While Amazon already uses routing algorithms in its deliveries, the competition is meant to add new variables, data, and other changes that could improve them and make them more useful.
- The competition is led by Amazon's Last Mile team and the MIT Center for Transportation & Logistics (CTL). Amazon says it may interview well-performing team members for research positions on Last Mile, the team that develops planning software for the company's delivery fleets.
- First-place winners will receive $100K, followed by $50K for second place, and $25K for third. CTL will also publish papers describing the top-performing models.
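As an illustration of what "predicting the routes chosen by experienced drivers" might be measured against, the sketch below scores a predicted stop sequence by the fraction of stop pairs it visits in a different relative order than the driver did (a normalized Kendall tau distance). The function and metric are hypothetical stand-ins; the competition's official evaluation is not described here.

```python
from itertools import combinations

def sequence_disagreement(predicted, actual):
    """Fraction of stop pairs visited in a different relative order
    than the driver's actual sequence (normalized Kendall tau distance).

    A hypothetical stand-in for the challenge's official scoring.
    """
    rank = {stop: i for i, stop in enumerate(actual)}
    pairs = list(combinations(predicted, 2))
    discordant = sum(1 for a, b in pairs if rank[a] > rank[b])
    return discordant / len(pairs) if pairs else 0.0

# Example: the model swaps the driver's last two stops.
print(sequence_disagreement(["A", "B", "D", "C"], ["A", "B", "C", "D"]))
# 0.1666... -> 1 of 6 stop pairs is out of order
```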
This story first appeared in Inside Amazon. You can read the full issue here.
AMAZON BLOG
Read This Thing
Every month, Inside AI will share an interesting, informative article or book we think you might enjoy. Today's comes from Bloomberg's Ashlee Vance, who wrote an in-depth report about data scientist Youyang Gu's model for forecasting COVID-19.
Shortly after the pandemic started, Gu noticed that large forecasting models were proving to be inaccurate. One built by the Institute for Health Metrics and Evaluation (IHME), for example, had predicted 60,000 U.S. COVID-19 deaths by August, though the actual figure turned out to be 160,000. The inaccuracies prompted Gu — then a 26-year-old MIT grad living with his parents in Santa Clara — to develop his own COVID-19 death predictor and website.
Gu's model, though simple, used machine learning to produce more precise forecasts. While other models relied on many data sources, Gu said he chose "to rely on past deaths to predict future deaths." Before long, it was churning out more accurate predictions than large-scale models backed by multimillion-dollar organizations.
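As a rough illustration of "relying on past deaths to predict future deaths," here is a toy autoregressive forecaster. It is a sketch of the general idea only, not Gu's model, which reportedly combined a classic SEIR epidemiological simulator with machine-learned parameters.

```python
import numpy as np

def forecast_deaths(daily_deaths, lags=7, horizon=14):
    """Toy forecaster: fit a linear model that predicts each day's
    deaths from the previous `lags` days, then roll it forward
    `horizon` days, feeding predictions back in as inputs.
    """
    y = np.asarray(daily_deaths, dtype=float)
    # Row t of X holds deaths on days t .. t+lags-1; the target is day t+lags.
    X = np.column_stack([y[i:len(y) - lags + i] for i in range(lags)])
    design = np.column_stack([np.ones(len(X)), X])  # add an intercept term
    coef, *_ = np.linalg.lstsq(design, y[lags:], rcond=None)

    history = list(y)
    forecasts = []
    for _ in range(horizon):
        window = np.array([1.0] + history[-lags:])
        next_day = max(0.0, float(window @ coef))  # deaths can't go negative
        forecasts.append(next_day)
        history.append(next_day)
    return forecasts
```

Even a sketch this simple shows the appeal of Gu's choice: death counts are among the most reliably reported pandemic signals, so a forecaster built only on them sidesteps noisier inputs like case counts and mobility data that other models leaned on.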
Christopher Murray, the director of IHME, says that once the organization got a better handle on the virus after April, its forecasts radically improved.
But that spring, week by week, more people started to pay attention to Gu’s work. He flagged his model to reporters on Twitter and e-mailed epidemiologists, asking them to check his numbers. Toward the end of April, the prominent University of Washington biologist Carl Bergstrom tweeted about Gu’s model, and not long after that the U.S. Centers for Disease Control and Prevention included Gu’s numbers on its Covid forecasting website. As the pandemic progressed, Gu, a Chinese immigrant who grew up in Illinois and California, found himself taking part in regular meetings with the CDC and teams of professional modelers and epidemiologists, as everyone tried to improve their forecasts.
Traffic to Gu’s website exploded, with millions of people checking in daily to see what was happening in their states and the U.S. overall. More often than not, his predicted figures ended up hugging the line of actual death figures when they arrived a few weeks later.
Read "The 27-Year-Old Who Became a Covid-19 Data Superstar."
QUICK HITS:
- This company is reinventing home and renters insurance. Get a quote to see how much you could save.*
- An information gap exists between AI creators and government policymakers, who need more knowledge on the subject to correctly regulate AI, according to AI policy researcher Adriana Bora and AI author David Alexandru Timis.
- Nanit, developer of a nursery camera with computer vision capabilities, raised $25M in a funding round led by GV, formerly Google Ventures.
- Rensselaer Polytechnic Institute computer scientists proposed the idea for an AI system that could analyze and predict a gun user's intent and, if that intent is judged improper, render the firearm inert.
- In an opinion piece for Bloomberg, mathematician Cathy O'Neil argues that neither humans nor AI are capable of effectively monitoring social media sites. AI did cover "for the inevitable failure of user moderation," she writes, but now "official or outsourced moderation is supposed to be covering for the inevitable failure of AI."
- What are the costs of inefficiencies in your product release cycle? Read how top companies save over $1M each year.*
*This is sponsored content.
Beth is a former investigative reporter for The Arizona Republic who authored a book about the U.S. solar industry. A graduate of the Walter Cronkite School of Journalism, she won a First Amendment Award and a Pulitzer Prize nomination for her co-reporting on the rising costs of Arizona's taxpayer-funded pension systems.
Editor
Charlotte Hayes-Clemens is an editor and writer based in Vancouver. She has dabbled in both the fiction and non-fiction worlds, having worked at HarperCollins Publishers and more recently as a writing coach for new and self-published authors. Proper semicolon usage is her hill to die on.