Google AI Has Created Its Own Artificial Intelligence Child
Introduction
Google AI has developed an artificial intelligence system that has created its own “fully functional AI child”. According to the latest AI news, this AI child is capable of outperforming its human-made equivalents.
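The “AI child” here refers to a network architecture discovered automatically rather than designed by hand. As a rough illustration of that idea only (not Google’s actual method, which relies on a learned controller network and a far larger compute budget), the hedged sketch below randomly samples small candidate networks, trains each briefly on stand-in data, and keeps the best-scoring one; all names and data in it are illustrative assumptions.

```python
# Toy sketch of "AI building AI": sample candidate architectures,
# score each one briefly, keep the best. Illustrative only; the real
# NASNet search uses a learned controller and massive compute.
import random
import numpy as np
import tensorflow as tf

# Stand-in dataset in place of a real vision benchmark (assumption).
x = np.random.rand(256, 32, 32, 3).astype("float32")
y = np.random.randint(0, 10, size=(256,))

def build_candidate(num_layers, width):
    """Build a small CNN described by two searched hyperparameters."""
    model = tf.keras.Sequential([tf.keras.Input(shape=(32, 32, 3))])
    for _ in range(num_layers):
        model.add(tf.keras.layers.Conv2D(width, 3, padding="same", activation="relu"))
    model.add(tf.keras.layers.GlobalAveragePooling2D())
    model.add(tf.keras.layers.Dense(10, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

best_acc, best_spec = 0.0, None
for _ in range(5):  # a handful of trials; real searches run thousands
    spec = (random.choice([1, 2, 3]), random.choice([16, 32, 64]))
    model = build_candidate(*spec)
    history = model.fit(x, y, epochs=1, verbose=0, validation_split=0.25)
    acc = history.history["val_accuracy"][-1]
    if acc > best_acc:
        best_acc, best_spec = acc, spec

print("best architecture (layers, width):", best_spec, "val accuracy:", best_acc)
```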
This computer-made system, called NASNet, is designed to recognize objects such as people and vehicles in videos and photographs. Currently, NASNet can identify objects in an image with an accuracy of 82.7%, which Google says is 1.2% better than comparable AI programs created by humans.
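Pretrained NASNet weights are available through common deep-learning toolkits, so the recognition capability described above can be tried directly. The sketch below assumes TensorFlow/Keras is installed and that a local image file named example.jpg exists (an illustrative assumption); it loads the smaller NASNetMobile variant and prints its top predicted ImageNet labels.

```python
# Minimal sketch: classify an image with a pretrained NASNet model.
# Assumes TensorFlow/Keras is installed and "example.jpg" is a local
# image supplied by the reader (both are assumptions).
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import NASNetMobile
from tensorflow.keras.applications.nasnet import preprocess_input, decode_predictions

# Load NASNetMobile with ImageNet weights (downloaded on first use).
model = NASNetMobile(weights="imagenet")

# Load and preprocess a single 224x224 image.
img = tf.keras.utils.load_img("example.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(tf.keras.utils.img_to_array(img), axis=0))

# Predict and print the top-3 ImageNet labels with their probabilities.
preds = model.predict(x)
for _, label, prob in decode_predictions(preds, top=3)[0]:
    print(f"{label}: {prob:.3f}")
```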

Google, the search engine giant, has made the system open source, which allows programmers from other companies to expand upon the program or customize it and develop their own versions. Google has presented this opportunity to AI developers in the hope that they will build innovative, out-of-the-box models.
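In practice, the open-source release means developers can reuse the published architecture and pretrained weights as a starting point for their own models. The following is a minimal transfer-learning sketch, assuming TensorFlow/Keras and a hypothetical five-class dataset (replaced here by random placeholder arrays): the pretrained convolutional base is frozen and only a new classification head is trained.

```python
# Minimal sketch of customizing NASNet: reuse the pretrained base as a
# feature extractor and train a new head on your own data. The 5-class
# task and random placeholder data below are assumptions for illustration.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import NASNetMobile

# Stand-in training data: 64 images of size 224x224 with 5 custom classes.
x_train = np.random.rand(64, 224, 224, 3).astype("float32")
y_train = np.random.randint(0, 5, size=(64,))

# Reuse the pretrained convolutional base and freeze its weights.
base = NASNetMobile(weights="imagenet", include_top=False, pooling="avg",
                    input_shape=(224, 224, 3))
base.trainable = False

# Attach a new classification head for the custom 5-class task.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.Dense(5, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Train only the new head; a real project would use a proper dataset.
model.fit(x_train, y_train, epochs=1, batch_size=16, verbose=0)
```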
Alphr has said that while the AI program appears harmless in its current state, it could prove dangerous as the technology advances significantly. The website further stated that such AI systems are capable of developing their own “biases” and passing them on to other machines.
However, the Daily Express noted that tech giants such as Facebook and Apple have formed a partnership on AI intended to benefit people and society. Their main focus will be on implementing strategies that allow AI to be developed for the benefit of humanity.
To test NASNet, Google applied it to the ImageNet image classification and COCO object detection datasets, which it describes as “two of the most respected large-scale academic datasets in computer vision”.
NASNet achieved 82.7% accuracy on ImageNet and 43.1% mAP (mean average precision) on COCO, which is 1.2% and 4% better, respectively, than all previously published results.
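For context, the 82.7% figure is a top-1 accuracy: the share of test images whose single highest-scoring prediction matches the ground-truth label (mAP on COCO is a more involved detection metric and is not reproduced here). A tiny sketch with made-up scores and labels illustrates the calculation.

```python
# Minimal sketch of how a top-1 accuracy figure like "82.7% on ImageNet"
# is computed: the fraction of images whose highest-scoring predicted
# label matches the ground-truth label. Toy arrays below are assumptions.
import numpy as np

# Hypothetical model scores for 5 images over 4 classes, plus true labels.
scores = np.array([[0.1, 0.7, 0.1, 0.1],
                   [0.6, 0.2, 0.1, 0.1],
                   [0.2, 0.2, 0.5, 0.1],
                   [0.3, 0.3, 0.2, 0.2],
                   [0.1, 0.1, 0.1, 0.7]])
labels = np.array([1, 0, 2, 1, 0])

top1 = (scores.argmax(axis=1) == labels).mean()
print(f"top-1 accuracy: {top1:.1%}")  # 3 of 5 correct -> 60.0%
```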
Conclusion
Google’s NASNet marks a significant breakthrough in AI development, showcasing the potential of self-improving machine learning systems. While it outperforms human-created equivalents in object recognition tasks, the open-source nature of NASNet presents an opportunity for developers to push the boundaries of AI further. However, concerns about the risks of bias and potential dangers of AI development continue to exist. As technology advances, ethical considerations will need to be carefully addressed to ensure that AI remains a force for good.
FAQs
- What is NASNet?
- NASNet is an AI system developed by Google to recognize objects like people and vehicles in images and videos. It outperforms human-made equivalents with an accuracy of 82.7%.
- What is the purpose of making NASNet open source?
- Google has made NASNet open source to allow developers to expand upon, customize, and create innovative AI models that can contribute to the field of artificial intelligence.
- How does NASNet compare to other AI systems?
- NASNet has shown improvements over previous AI models, achieving 1.2% higher accuracy on ImageNet and 4% higher mAP on COCO, making it one of the most advanced object recognition systems available.
- Are there any risks associated with NASNet and AI development?
- While NASNet itself appears harmless, experts warn that advanced AI systems could develop biases that may be spread across different machines. Ethical considerations are crucial in ensuring AI development benefits humanity.
- How did Google test NASNet?
- Google tested NASNet on two respected datasets in computer vision, ImageNet and COCO, where it achieved impressive results in object classification and detection.
Keep reading iTMunch for more AI-related news.