Google Brain was a deep learning artificial intelligence research team that served as the sole AI branch of Google before being incorporated under the newer umbrella of Google AI, a research division at Google dedicated to artificial intelligence. Formed in 2011, it combined open-ended machine learning research with information systems and large-scale computing resources.[1] It created tools such as TensorFlow, which allow neural networks to be used by the public, and multiple internal AI research projects,[2] and aimed to create research opportunities in machine learning and natural language processing.[2] It was merged into former Google sister company DeepMind to form Google DeepMind in April 2023.

Google Brain
Company type: Artificial intelligence and machine learning
Founders: Andrew Ng, Greg Corrado
Defunct: April 2023
Successor: Google DeepMind
Headquarters: Mountain View, California
Website: research.google/brain (archived May 2023)

History

The Google Brain project began in 2011 as a part-time research collaboration between Google Fellow Jeff Dean and Google researcher Greg Corrado.[3] Google Brain started as a Google X project and became so successful that it was graduated back to Google: Astro Teller has said that Google Brain paid for the entire cost of Google X.[4]

In June 2012, The New York Times reported that a cluster of 16,000 processors in 1,000 computers dedicated to mimicking some aspects of human brain activity had successfully trained itself to recognize a cat based on 10 million digital images taken from YouTube videos.[3] The story was also covered by National Public Radio.[5]

In March 2013, Google hired Geoffrey Hinton, a leading researcher in the deep learning field, and acquired the company DNNResearch Inc. headed by Hinton. Hinton said that he would be dividing his future time between his university research and his work at Google.[6]

In April 2023, Google Brain merged with Google sister company DeepMind to form Google DeepMind, as part of the company's continued efforts to accelerate work on AI.[7]

Team and location

Google Brain was initially established by Google Fellow Jeff Dean and visiting Stanford professor Andrew Ng. In 2014, the team included Jeff Dean, Quoc Le, Ilya Sutskever, Alex Krizhevsky, Samy Bengio, and Vincent Vanhoucke. In 2017, team members included Anelia Angelova, Samy Bengio, Greg Corrado, George Dahl, Michael Isard, Anjuli Kannan, Hugo Larochelle, Chris Olah, Salih Edneer, Benoit Steiner, Vincent Vanhoucke, Vijay Vasudevan, and Fernanda Viegas.[8] Chris Lattner, who created Apple's programming language Swift and then ran Tesla's autonomy team for six months, joined Google Brain's team in August 2017.[9] Lattner left the team in January 2020 and joined SiFive.[10]

As of 2021, Google Brain was led by Jeff Dean, Geoffrey Hinton, and Zoubin Ghahramani. Other members included Katherine Heller, Pi-Chuan Chang, Ian Simon, Jean-Philippe Vert, Nevena Lazic, Anelia Angelova, Lukasz Kaiser, Carrie Jun Cai, Eric Breck, Ruoming Pang, Carlos Riquelme, Hugo Larochelle, and David Ha.[8] Samy Bengio left the team in April 2021,[11] and Zoubin Ghahramani took on his responsibilities.

Google Research includes Google Brain and is based in Mountain View, California. It also has satellite groups in Accra, Amsterdam, Atlanta, Beijing, Berlin, Cambridge (Massachusetts), Israel, Los Angeles, London, Montreal, Munich, New York City, Paris, Pittsburgh, Princeton, San Francisco, Seattle, Tokyo, Toronto, and Zürich.[12]

Projects

Artificial-intelligence-devised encryption system

In October 2016, Google Brain designed an experiment to determine whether neural networks are capable of learning secure symmetric encryption.[13] In this experiment, three neural networks were created: Alice, Bob and Eve.[14] Adhering to the idea of a generative adversarial network (GAN), the goal of the experiment was for Alice to send an encrypted message to Bob that Bob could decrypt, but that the adversary, Eve, could not.[14] Alice and Bob maintained an advantage over Eve, in that they shared a key used for encryption and decryption.[13] In doing so, Google Brain demonstrated the capability of neural networks to learn secure encryption.[13]
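
A minimal sketch of this adversarial setup, using small fully connected networks as stand-ins (the published experiment used convolutional architectures over bit vectors, and its exact loss functions differ): Alice and Bob are trained to communicate accurately while keeping Eve's reconstruction error near chance, and Eve is trained in alternation to break the ciphertext.

```python
# Illustrative sketch only: tiny MLPs stand in for the paper's networks.
import torch
import torch.nn as nn

N = 16  # bits per plaintext and per shared key

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 2 * in_dim), nn.ReLU(),
                         nn.Linear(2 * in_dim, out_dim), nn.Tanh())

alice = mlp(2 * N, N)  # (plaintext, key) -> ciphertext
bob = mlp(2 * N, N)    # (ciphertext, key) -> recovered plaintext
eve = mlp(N, N)        # ciphertext alone -> guessed plaintext

opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()), lr=1e-3)
opt_e = torch.optim.Adam(eve.parameters(), lr=1e-3)
mae = nn.L1Loss()

for step in range(5000):
    p = torch.randint(0, 2, (256, N)).float() * 2 - 1  # plaintext bits in {-1, +1}
    k = torch.randint(0, 2, (256, N)).float() * 2 - 1  # shared key bits

    # Train Alice and Bob: Bob should recover p; Eve should do no better than chance.
    c = alice(torch.cat([p, k], dim=1))
    loss_bob = mae(bob(torch.cat([c, k], dim=1)), p)
    loss_eve = mae(eve(c), p)
    loss_ab = loss_bob + (1.0 - loss_eve) ** 2  # per-bit L1 error of 1.0 ~= chance
    opt_ab.zero_grad(); loss_ab.backward(); opt_ab.step()

    # Train Eve on fresh ciphertexts to decrypt without the key.
    c = alice(torch.cat([p, k], dim=1)).detach()
    loss_e = mae(eve(c), p)
    opt_e.zero_grad(); loss_e.backward(); opt_e.step()
```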

Image enhancement

In February 2017, Google Brain developed a probabilistic method for converting pictures with 8x8 resolution to a resolution of 32x32.[15][16] The method built upon an existing probabilistic model called PixelCNN to generate pixel translations.[17][18]

The proposed software utilizes two neural networks to make approximations for the pixel makeup of translated images.[16][19] The first network, known as the "conditioning network," downsizes high-resolution images to 8x8 and attempts to create mappings from the original 8x8 image to these higher-resolution ones.[16] The other network, known as the "prior network," uses the mappings from the previous network to add more detail to the original image.[16] The resulting translated image is not the same image in higher resolution, but rather a 32x32 resolution estimation based on other existing high-resolution images.[16] Google Brain's results indicate the possibility for neural networks to enhance images.[20]
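
A hedged sketch of the core generation mechanism, under the assumption (consistent with the underlying paper) that per-pixel logits from the conditioning network and from the autoregressive prior are summed before each output pixel is sampled. The network definitions below are minimal placeholders, not the paper's architecture; a real prior would use masked, PixelCNN-style convolutions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

LEVELS = 8  # coarse grayscale intensities, for brevity

# Conditioning network: 8x8 input -> per-pixel logits for the 32x32 output.
conditioning = nn.Sequential(
    nn.Upsample(scale_factor=4, mode="nearest"),
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, LEVELS, 3, padding=1),
)

# "Prior" network: in the paper a PixelCNN; here a stand-in that scores the
# next pixel from the partially generated 32x32 canvas.
prior = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, LEVELS, 3, padding=1),
)

low_res = torch.rand(1, 1, 8, 8)
canvas = torch.zeros(1, 1, 32, 32)

with torch.no_grad():
    cond_logits = conditioning(low_res)  # shape (1, LEVELS, 32, 32)
    for y in range(32):                  # raster-scan, pixel-by-pixel generation
        for x in range(32):
            logits = cond_logits[0, :, y, x] + prior(canvas)[0, :, y, x]
            level = torch.multinomial(F.softmax(logits, dim=0), 1)
            canvas[0, 0, y, x] = level.float() / (LEVELS - 1)
```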

Google Translate

The Google Brain team contributed to the Google Translate project by employing a new deep learning system that combines artificial neural networks with vast databases of multilingual texts.[21] In September 2016, Google Neural Machine Translation (GNMT) was launched, an end-to-end learning framework able to learn from a large number of examples.[21] Previously, Google Translate's Phrase-Based Machine Translation (PBMT) approach would statistically analyze word by word and try to match corresponding words in other languages without considering the surrounding phrases in the sentence.[22] But rather than choosing a replacement for each individual word in the desired language, GNMT evaluates word segments in the context of the rest of the sentence to choose more accurate replacements.[2] Compared to older PBMT models, the GNMT model scored a 24% improvement in similarity to human translation, with a 60% reduction in errors.[2][21] GNMT has also shown significant improvement for notoriously difficult translations, like Chinese to English.[21]

While the introduction of GNMT increased the quality of Google Translate's translations for the pilot languages, it was very difficult to create such improvements for all of its 103 languages. Addressing this problem, the Google Brain team developed a Multilingual GNMT system, which extended the previous one by enabling translations between multiple languages. Furthermore, it allows for zero-shot translation: translating between a language pair the system has never explicitly seen during training.[23] Google also announced that Google Translate can translate without transcribing, using neural networks, making it possible to translate speech in one language directly into text in another language without an intermediate transcription step.

According to researchers at Google Brain, this intermediate step can be avoided using neural networks. For the system to learn this, they exposed it to many hours of Spanish audio together with the corresponding English text; the layers of the neural network were then able to link the corresponding parts and transform the audio waveform into English text.[24] A drawback of the GNMT model is that translation time increases exponentially with the number of words in the sentence.[2] This led the Google Brain team to add 2,000 more processors to ensure the new translation process would still be fast and reliable.[22]
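
A small illustration of the mechanism the Multilingual GNMT paper describes for steering one shared model toward a target language: an artificial token naming the desired output language is prepended to the source sentence. The helper function here is hypothetical, but the token format matches the paper.

```python
# Sketch of the multilingual GNMT input convention: the desired output
# language is signaled by an artificial token prepended to the source text.
def add_target_token(source_sentence: str, target_lang: str) -> str:
    """Prepend a target-language token such as '<2es>' (translate to Spanish)."""
    return f"<2{target_lang}> {source_sentence}"

print(add_target_token("Hello, how are you?", "es"))
# -> <2es> Hello, how are you?
# Feeding the same model '<2ja>' instead asks it for Japanese; because one
# model serves all pairs, it can be asked for combinations never seen in
# training, which is what enables zero-shot translation.
```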

Robotics

Aiming to improve on traditional robotics control algorithms, where new robot skills need to be hand-programmed, robotics researchers at Google Brain developed machine learning techniques to allow robots to learn new skills on their own.[25] They also attempted to develop ways for information sharing between robots, so that robots can learn from each other during their learning process, an approach known as cloud robotics.[26] As a result, Google launched the Google Cloud Robotics Platform for developers in 2019, an effort to combine robotics, AI, and the cloud to enable efficient robotic automation through cloud-connected collaborative robots.[26]

Robotics research at Google Brain focused mostly on improving and applying deep learning algorithms to enable robots to complete tasks by learning from experience, simulation, human demonstrations, and/or visual representations.[27][28][29][30] For example, Google Brain researchers showed that robots can learn to pick and throw rigid objects into selected boxes by experimenting in an environment without being pre-programmed to do so.[27] In other research, robots were trained to learn behaviors such as pouring liquid from a cup, using videos of human demonstrations recorded from multiple viewpoints.[29]

Google Brain researchers collaborated with other companies and academic institutions on robotics research. In 2016, the Google Brain team collaborated with researchers at X on learning hand-eye coordination for robotic grasping.[31] Their method allowed real-time robot control for grasping novel objects with self-correction.[31] In 2020, researchers from Google Brain, Intel AI Lab, and UC Berkeley created an AI model for robots to learn surgery-related tasks such as suturing from training with surgery videos.[30]

Interactive Speaker Recognition with Reinforcement Learning

In 2020, the Google Brain team and the University of Lille presented a model for automatic speaker recognition which they called Interactive Speaker Recognition (ISR). The ISR module recognizes a speaker from a given list of speakers by requesting only a few user-specific words.[32] The model can be altered to choose speech segments in the context of text-to-speech training.[32] It can also prevent malicious voice generators from accessing the data.[32]

TensorFlow

TensorFlow is an open-source software library powered by Google Brain that allows anyone to utilize machine learning by providing the tools to train one's own neural network.[2] The tool has been used, for example, to develop deep learning software that farmers use to reduce the manual labor required to sort their yield, trained with a data set of human-sorted images.[2]
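
A minimal sketch of what "training one's own neural network" looks like with TensorFlow's Keras API. The data here is synthetic stand-in data; an application like the crop-sorting one above would substitute a labeled image dataset.

```python
# Tiny end-to-end TensorFlow/Keras example: define, train, and use a classifier.
import numpy as np
import tensorflow as tf

x = np.random.rand(200, 4).astype("float32")  # 200 samples, 4 features each
y = (x.sum(axis=1) > 2.0).astype("int32")     # synthetic binary labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(x[:3]))  # predicted class probabilities for three samples
```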

Magenta

Magenta is a project that uses Google Brain to create new information in the form of art and music rather than classify and sort existing data.[2] TensorFlow was updated with a suite of tools for users to guide the neural network to create images and music.[2] However, a team from Valdosta State University found that the AI struggles to perfectly replicate human intention in artistry, similar to the issues faced in translation.[2]

Medical applications

The image sorting capabilities of Google Brain have been used to help detect certain medical conditions by seeking out patterns that human doctors may not notice to provide an earlier diagnosis.[2] During screening for breast cancer, this method was found to have one quarter the false positive rate of human pathologists, who require more time to look over each photo and cannot spend their entire focus on this one task.[2] Due to the neural network's very specific training for a single task, it cannot identify other afflictions present in a photo that a human could easily spot.[2]

Transformer

The transformer deep learning architecture was invented by Google Brain researchers in 2017, and explained in the scientific paper "Attention Is All You Need".[33] Google owns a patent on this widely used architecture, but has not enforced it.[34][35]
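
The paper's central operation, scaled dot-product attention, computes softmax(QKᵀ/√d_k)V over query, key, and value matrices; a self-contained NumPy rendering:

```python
# Scaled dot-product attention as defined in "Attention Is All You Need":
# Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)    # for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                              # attention-weighted values

Q = np.random.rand(3, 8)  # 3 queries of dimension d_k = 8
K = np.random.rand(5, 8)  # 5 keys
V = np.random.rand(5, 8)  # 5 values
print(scaled_dot_product_attention(Q, K, V).shape)  # -> (3, 8)
```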

Text-to-image model

[Image: example of an image generated by Imagen 3.0]

Google Brain announced in 2022 that it created two different types of text-to-image models called Imagen and Parti that compete with OpenAI's DALL-E.[36][37]

Later in 2022, the project was extended to text-to-video.[38]

Imagen development was transferred to Google DeepMind after the merger with DeepMind.[39]

Other Google products

Google Brain's technology is used in various other Google products, such as the Android operating system's speech recognition system, photo search for Google Photos, smart reply in Gmail, and video recommendations in YouTube.[40][41][42]

Reception

Google Brain has received coverage in Wired,[43][44][45] NPR,[5] and Big Think.[46] These articles contain interviews with key team members Ray Kurzweil and Andrew Ng, and focus on explanations of the project's goals and applications.[43][5][46]

Controversies

In December 2020, AI ethicist Timnit Gebru left Google.[47] While the exact nature of her quitting or being fired is disputed, the cause of the departure was her refusal to retract a paper entitled "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" and a related ultimatum she made, setting conditions that had to be met or else she would leave.[47] This paper explored potential risks of the growth of AI such as Google Brain, including environmental impact, biases in training data, and the ability to deceive the public.[47][48] The request to retract the paper was made by Megan Kacholia, vice president of Google Brain.[49] As of April 2021, nearly 7,000 current or former Google employees and industry supporters had signed an open letter accusing Google of "research censorship" and condemning Gebru's treatment at the company.[50]

In February 2021, Google fired one of the leaders of the company's AI ethics team, Margaret Mitchell.[49] The company's statement alleged that Mitchell had broken company policy by using automated tools to find support for Gebru.[49] In the same month, engineers outside the ethics team began to quit, citing the termination of Gebru as their reason for leaving.[51] In April 2021, Google Brain co-founder Samy Bengio announced his resignation from the company.[11] Despite being Gebru's manager, Bengio was not notified before her termination, and he posted online in support of both her and Mitchell.[11] While Bengio's announcement focused on personal growth as his reason for leaving, anonymous sources indicated to Reuters that the turmoil within the AI ethics team played a role in his considerations.[11]

In March 2022, Google fired AI researcher Satrajit Chatterjee after he questioned the findings of a paper published in Nature by Google AI team members Anna Goldie and Azalia Mirhoseini.[52][53] This paper reported good results from the use of AI techniques (in particular reinforcement learning) for the placement problem for integrated circuits.[54] However, this result is quite controversial,[55][56][57] as the paper does not contain head-to-head comparisons to existing placers and is difficult to replicate due to proprietary content. At least one initially favorable commentary has been retracted upon further review,[58] and the paper is under investigation by Nature.[59]

References

  1. ^"What is Google Brain?".GeeksforGeeks.February 6, 2020.Archivedfrom the original on April 22, 2022.RetrievedApril 9,2021.
  2. ^abcdefghijklmHelms, Mallory; Ault, Shaun V.; Mao, Guifen; Wang, Jin (March 9, 2018)."An Overview of Google Brain and Its Applications".Proceedings of the 2018 International Conference on Big Data and Education.ICBDE '18. Honolulu, HI, USA: Association for Computing Machinery. pp.72–75.doi:10.1145/3206157.3206175.ISBN978-1-4503-6358-7.S2CID44107806.Archivedfrom the original on May 4, 2021.RetrievedApril 8,2021.
  3. ^abMarkoff, John(June 25, 2012)."How Many Computers to Identify a Cat? 16,000".The New York Times.Archivedfrom the original on May 9, 2017.RetrievedFebruary 11,2014.
  4. ^Conor Dougherty (February 16, 2015)."Astro Teller, Google's 'Captain of Moonshots,' on Making Profits at Google X".Archivedfrom the original on October 22, 2015.RetrievedOctober 25,2015.
  5. ^abc"A Massive Google Network Learns To Identify — Cats".National Public Radio.June 26, 2012.Archivedfrom the original on June 13, 2021.RetrievedFebruary 11,2014.
  6. ^"U of T neural networks start-up acquired by Google"(Press release). Toronto, ON. March 12, 2013.Archivedfrom the original on October 8, 2019.RetrievedMarch 13,2013.
  7. ^Roth, Emma; Peters, Jay (April 20, 2023)."Google's big AI push will combine Brain and DeepMind into one team".The Verge.Archivedfrom the original on April 20, 2023.RetrievedApril 21,2023.
  8. ^ab"Brain Team – Google Research".Google Research.Archivedfrom the original on October 2, 2021.RetrievedApril 8,2021.
  9. ^Etherington, Darrell (August 14, 2017)."Swift creator Chris Lattner joins Google Brain after Tesla Autopilot stint".TechCrunch.Archivedfrom the original on August 19, 2021.RetrievedOctober 11,2017.
  10. ^"Former Google and Tesla Engineer Chris Lattner to Lead SiFive Platform Engineering Team".businesswire.January 27, 2020.Archivedfrom the original on June 3, 2021.RetrievedApril 9,2021.
  11. ^abcdDave, Jeffrey Dastin, Paresh (April 7, 2021)."Google AI scientist Bengio resigns after colleagues' firings: email".Reuters.Archivedfrom the original on June 2, 2021.RetrievedApril 8,2021.{{cite news}}:CS1 maint: multiple names: authors list (link)
  12. ^"Build for Everyone – Google Careers".careers.google.Archivedfrom the original on October 5, 2021.RetrievedApril 8,2021.
  13. ^abcZhu, Y.; Vargas, D. V.; Sakurai, K. (November 2018)."Neural Cryptography Based on the Topology Evolving Neural Networks".2018 Sixth International Symposium on Computing and Networking Workshops (CANDARW).pp.472–478.doi:10.1109/CANDARW.2018.00091.ISBN978-1-5386-9184-7.S2CID57192497.Archivedfrom the original on June 2, 2021.RetrievedApril 9,2021.
  14. ^abAbadi, Martín; Andersen, David G. (2016). "Learning to Protect Communications with Adversarial Neural Cryptography".ICLR.arXiv:1610.06918.Bibcode:2016arXiv161006918A.
  15. ^Dahl, Ryan; Norouzi, Mohammad; Shlens, Jonathon (2017). "Pixel Recursive Super Resolution".ICCV.arXiv:1702.00783.Bibcode:2017arXiv170200783D.
  16. ^abcde"Google Brain super-resolution image tech makes" zoom, enhance! "real".arstechnica.co.uk.February 7, 2017.Archivedfrom the original on July 13, 2021.RetrievedMay 15,2017.
  17. ^Bulat, Adrian; Yang, Jing; Tzimiropoulos, Georgios (2018),"To Learn Image Super-Resolution, Use a GAN to Learn How to do Image Degradation First",Computer Vision – ECCV 2018,Lecture Notes in Computer Science, vol. 11210, Cham: Springer International Publishing, pp.187–202,arXiv:1807.11458,doi:10.1007/978-3-030-01231-1_12,ISBN978-3-030-01230-4,S2CID51882734,archivedfrom the original on December 26, 2021,retrievedApril 9,2021
  18. ^Oord, Aaron Van; Kalchbrenner, Nal; Kavukcuoglu, Koray (June 11, 2016)."Pixel Recurrent Neural Networks".International Conference on Machine Learning.PMLR:1747–1756.arXiv:1601.06759.Archivedfrom the original on October 1, 2021.RetrievedApril 9,2021.
  19. ^"Google uses AI to sharpen low-res images".engadget.February 7, 2017.Archivedfrom the original on May 2, 2021.RetrievedMay 15,2017.
  20. ^"Google just made 'zoom and enhance' a reality – kinda".cnet.Archivedfrom the original on September 5, 2021.RetrievedMay 15,2017.
  21. ^abcdCastelvecchi, Davide (2016)."Deep learning boosts Google Translate tool".Nature News.doi:10.1038/nature.2016.20696.S2CID64308242.Archivedfrom the original on November 8, 2020.RetrievedApril 8,2021.
  22. ^abLewis-Kraus, Gideon (December 14, 2016)."The Great A.I. Awakening".The New York Times.ISSN0362-4331.Archivedfrom the original on May 5, 2017.RetrievedApril 8,2021.
  23. ^Johnson, Melvin; Schuster, Mike; Le, Quoc V.; Krikun, Maxim; Wu, Yonghui; Chen, Zhifeng; Thorat, Nikhil; Viégas, Fernanda; Wattenberg, Martin; Corrado, Greg; Hughes, Macduff (October 1, 2017)."Google's Multilingual Neural Machine Translation System: Enabling Zero-Shot Translation".Transactions of the Association for Computational Linguistics.5:339–351.arXiv:1611.04558.doi:10.1162/tacl_a_00065.ISSN2307-387X.
  24. ^Reynolds, Matt."Google uses neural networks to translate without transcribing".New Scientist.Archivedfrom the original on April 18, 2021.RetrievedMay 15,2017.
  25. ^Metz, Cade; Dawson, Brian; Felling, Meg (March 26, 2019)."Inside Google's Rebooted Robotics Program".The New York Times.ISSN0362-4331.Archivedfrom the original on September 16, 2021.RetrievedApril 8,2021.
  26. ^ab"Google Cloud Robotics Platform coming to developers in 2019".The Robot Report.October 24, 2018.Archivedfrom the original on August 26, 2021.RetrievedApril 8,2021.
  27. ^abZeng, A.; Song, S.; Lee, J.; Rodriguez, A.; Funkhouser, T. (August 2020)."TossingBot: Learning to Throw Arbitrary Objects With Residual Physics".IEEE Transactions on Robotics.36(4):1307–1319.arXiv:1903.11239.doi:10.1109/TRO.2020.2988642.ISSN1941-0468.
  28. ^Gu, S.; Holly, E.; Lillicrap, T.; Levine, S. (May 2017)."Deep reinforcement learning for robotic manipulation with asynchronous off-policy updates".2017 IEEE International Conference on Robotics and Automation (ICRA).pp.3389–3396.arXiv:1610.00633.doi:10.1109/ICRA.2017.7989385.ISBN978-1-5090-4633-1.S2CID18389147.Archivedfrom the original on May 13, 2021.RetrievedApril 8,2021.
  29. ^abSermanet, P.; Lynch, C.; Chebotar, Y.; Hsu, J.; Jang, E.; Schaal, S.; Levine, S.; Brain, G. (May 2018)."Time-Contrastive Networks: Self-Supervised Learning from Video".2018 IEEE International Conference on Robotics and Automation (ICRA).pp.1134–1141.arXiv:1704.06888.doi:10.1109/ICRA.2018.8462891.ISBN978-1-5386-3081-5.S2CID3997350.Archivedfrom the original on May 4, 2021.RetrievedApril 8,2021.
  30. ^abTanwani, A. K.; Sermanet, P.; Yan, A.; Anand, R.; Phielipp, M.; Goldberg, K. (May 2020)."Motion2Vec: Semi-Supervised Representation Learning from Surgical Videos".2020 IEEE International Conference on Robotics and Automation (ICRA).pp.2174–2181.arXiv:2006.00545.doi:10.1109/ICRA40945.2020.9197324.ISBN978-1-7281-7395-5.S2CID219176734.Archivedfrom the original on May 4, 2021.RetrievedApril 8,2021.
  31. ^abLevine, Sergey; Pastor, Peter; Krizhevsky, Alex; Ibarz, Julian; Quillen, Deirdre (April 1, 2018)."Learning hand-eye coordination for robotic grasping with deep learning and large-scale data collection".The International Journal of Robotics Research.37(4–5):421–436.arXiv:1603.02199.doi:10.1177/0278364917710318.ISSN0278-3649.
  32. ^abcSeurin, Mathieu; Strub, Florian; Preux, Philippe; Pietquin, Olivier (October 25, 2020)."A Machine of Few Words: Interactive Speaker Recognition with Reinforcement Learning".Interspeech 2020.ISCA: ISCA:4323–4327.arXiv:2008.03127.doi:10.21437/interspeech.2020-2892.S2CID221083446.Archivedfrom the original on May 12, 2021.RetrievedApril 9,2021.
  33. ^Goldman, Sharon (March 20, 2024)."'Attention is All You Need' creators look beyond Transformers for AI at Nvidia GTC: 'The world needs something better'".VentureBeat.Archivedfrom the original on April 5, 2024.RetrievedApril 14,2024.
  34. ^Maxwell, Thomas."Google's patents cover tech in ChatGPT. But fighting rivals in court isn't worth it, legal experts say".Business Insider.Archivedfrom the original on January 24, 2024.RetrievedApril 14,2024.
  35. ^Zhavoronkov, Alex (January 23, 2023)."Can Google Challenge OpenAI With Self-Attention Patents?".Forbes.Archivedfrom the original on March 28, 2023.RetrievedApril 14,2024.
  36. ^Vincent, James (May 24, 2022)."All these images were generated by Google's latest text-to-image AI".The Verge.Vox Media.Archivedfrom the original on February 15, 2023.RetrievedMay 28,2022.
  37. ^Khan, Imad."Google's Parti Generator Relies on 20 Billion Inputs to Create Photorealistic Images".CNET.Archivedfrom the original on June 18, 2023.RetrievedJune 23,2022.
  38. ^Edwards, Benj (October 5, 2022)."Google's newest AI generator creates HD video from text prompts".Ars Technica.Archivedfrom the original on February 7, 2023.RetrievedDecember 28,2022.
  39. ^"Imagen 3".Google DeepMind.October 30, 2024.RetrievedNovember 16,2024.
  40. ^"How Google Retooled Android With Help From Your Brain".Wired.ISSN1059-1028.Archivedfrom the original on July 27, 2021.RetrievedApril 8,2021.
  41. ^"Google Open-Sources The Machine Learning Tech Behind Google Photos Search, Smart Reply And More".TechCrunch.November 9, 2015.Archivedfrom the original on May 12, 2021.RetrievedApril 8,2021.
  42. ^"This Is Google's Plan to Save YouTube".Time.May 18, 2015.Archivedfrom the original on July 30, 2021.RetrievedMay 18,2015.
  43. ^abLevy, Steven(April 25, 2013)."How Ray Kurzweil Will Help Google Make the Ultimate AI Brain".Wired.Archivedfrom the original on February 10, 2014.RetrievedFebruary 11,2014.
  44. ^Wohlsen, Marcus (January 27, 2014)."Google's Grand Plan to Make Your Brain Irrelevant".Wired.Archivedfrom the original on February 14, 2014.RetrievedFebruary 11,2014.
  45. ^Hernandez, Daniela (May 7, 2013)."The Man Behind the Google Brain: Andrew Ng and the Quest for the New AI".Wired.Archivedfrom the original on February 8, 2014.RetrievedFebruary 11,2014.
  46. ^ab"Ray Kurzweil and the Brains Behind the Google Brain".Big Think.December 8, 2013.Archivedfrom the original on March 27, 2014.RetrievedFebruary 11,2014.
  47. ^abc"We read the paper that forced Timnit Gebru out of Google. Here's what it says".MIT Technology Review.Archivedfrom the original on October 6, 2021.RetrievedApril 8,2021.
  48. ^Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (March 3, 2021). "On the Dangers of Stochastic Parrots".Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency.Virtual Event Canada: ACM. pp.610–623.doi:10.1145/3442188.3445922.ISBN978-1-4503-8309-7.
  49. ^abcSchiffer, Zoe (February 19, 2021)."Google fires second AI ethics researcher following internal investigation".The Verge.Archivedfrom the original on September 29, 2021.RetrievedApril 8,2021.
  50. ^Change, Google Walkout For Real (December 15, 2020)."Standing with Dr. Timnit Gebru — #ISupportTimnit #BelieveBlackWomen".Medium.Archivedfrom the original on October 7, 2021.RetrievedApril 8,2021.{{cite web}}:|first=has generic name (help)
  51. ^Dave, Jeffrey Dastin, Paresh (February 4, 2021)."Two Google engineers resign over firing of AI ethics researcher Timnit Gebru".Reuters.Archivedfrom the original on May 5, 2021.RetrievedApril 8,2021.{{cite news}}:CS1 maint: multiple names: authors list (link)
  52. ^Wakabayashi, Daisuke; Metz, Cade (May 2, 2022)."Another Firing Among Google's A.I. Brain Trust, and More Discord".The New York Times.ISSN0362-4331.Archivedfrom the original on June 12, 2022.RetrievedJune 12,2022.
  53. ^Simonite, Tom."Tension Inside Google Over a Fired AI Researcher's Conduct".Wired.ISSN1059-1028.RetrievedJune 12,2022.
  54. ^Azalia Mirhoseini, Anna Goldie, Mustafa Yazgan (2021). "A Graph Placement Methodology for Fast Chip Design".Nature.594(7862):207–212.arXiv:2004.10746.Bibcode:2021Natur.594..207M.doi:10.1038/s41586-021-03544-w.PMID34108699.{{cite journal}}:CS1 maint: multiple names: authors list (link)
  55. ^Cheng, Chung-Kuan, Andrew B. Kahng, Sayak Kundu, Yucheng Wang, and Zhiang Wang (March 2023). "Assessment of Reinforcement Learning for Macro Placement".Proceedings of the 2023 International Symposium on Physical Design.pp.158–166.arXiv:2302.11014.doi:10.1145/3569052.3578926.ISBN978-1-4503-9978-4.{{cite book}}:CS1 maint: multiple names: authors list (link)
  56. ^Igor L. Markov (2023). "The False Dawn: Reevaluating Google's Reinforcement Learning for Chip Macro Placement".arXiv:2306.09633[cs.LG].
  57. ^Agam Shah (October 3, 2023)."Google's Controversial AI Chip Paper Under Scrutiny Again".Archivedfrom the original on December 4, 2023.RetrievedJanuary 14,2024.
  58. ^Kahng, Andrew B. (2021). "RETRACTED ARTICLE: AI system outperforms humans in designing floorplans for microchips".Nature.594(7862):183–185.Bibcode:2021Natur.594..183K.doi:10.1038/d41586-021-01515-9.PMID34108693.
  59. ^"Nature flags doubts over Google AI study, pulls commentary".Retraction Watch. September 26, 2023.