When I was doing my final-year project in 1989 at IIT Roorkee, India, in a subfield of AI known as neural networks, the field felt "out of this world." Its grand challenges were limited memory, slow processing, and a lack of training testbeds. Starting around 2010, AI has been re-energized by fast hardware (Graphics Processing Units (GPUs), ASICs, and ML accelerators) and by vast training testbeds (big data such as images, IoT streams, and social networks). This combination has enabled AI to advance at a breakneck pace.