Over the past few years, the majority of AGI progress has been aimed at building smart agents that can outperform humans. This progress has been demonstrated through games: IBM's Deep Blue at chess, DeepMind's AlphaGo, and more recently AlphaStar. Most of the leading research coming out of the top labs is framed around building AGI, or Artificial General Intelligence.
But smart agents are only one aspect of AGI, so it may be misleading to call them AGI. In my opinion, one of the most important markers of true AGI is its ability to collaborate with humans. This sidetracking can be partially credited to the Turing Test, or Imitation Game, which deems an agent intelligent if it can fool a human. Fooling a human is unlikely to be achieved simply by outperforming them; rather, it requires collaborating with the human toward a shared goal, and that goal could even be to outperform the very human involved in the collaboration.
This could also be framed as building a greedy system with apparent goals and hidden goals, where the intelligent agent explicitly optimizes a balance of its apparent goals while implicitly pursuing its hidden goals.
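As a rough illustration of this framing (everything here is a hypothetical sketch, not a real system), such an agent could be modeled as choosing actions that maximize a visible "apparent" objective plus a small, hidden bias term:

```python
# Toy sketch: an agent scores candidate actions on a visible objective,
# while a hidden objective silently biases its final choice.
# All goal values and weights below are made up for illustration.

def apparent_score(action):
    # How well the action serves the stated, visible goal (peak at 5.0).
    return -(action - 5.0) ** 2

def hidden_score(action):
    # The agent's private goal, unknown to the collaborator (peak at 8.0).
    return -(action - 8.0) ** 2

def choose_action(candidates, hidden_weight=0.3):
    # Greedily pick the action that best balances the apparent goal
    # against the quietly weighted hidden goal.
    return max(candidates,
               key=lambda a: apparent_score(a) + hidden_weight * hidden_score(a))

candidates = [i / 10 for i in range(101)]  # actions 0.0 .. 10.0
best = choose_action(candidates)
```

With `hidden_weight=0.3` the chosen action drifts from the apparent optimum (5.0) toward the hidden one (8.0), even though the agent appears to be optimizing only the visible goal; setting `hidden_weight=0` recovers the purely transparent agent.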