This blog post summarizes the book “You Look Like a Thing and I Love You” by Janelle Shane.


AI is Everywhere

If you are wondering about the title of the book, the first chapter throws light on it. The title is one of the cheesy pickup lines that an AI algorithm dished out. The author, Ms. Janelle Shane, diligently collected a set of pickup lines from the Internet and trained an AI to generate new ones. For some reason, she felt that one of them was quirky enough, weird enough, to warrant being the title of this book. The main point the author makes in the preface is that AI is already everywhere and, of course, far from perfect. The quirks of AI are beginning to have consequences far beyond the merely inconvenient.

AI algorithms have become so complex that the only tools we have for understanding them work on their output. By analyzing the weirdness and idiosyncrasies of the output, we can get a sense of the current capabilities and, more importantly, the limitations of AI.

The author does a good job of introducing machine learning to a total novice. She takes the example of an AI program generating Knock-Knock jokes, which needs lots of data and a goal to master. It is immediately clear that any AI program needs those two components; the rest, it figures out by itself. The power of AI almost appears magical, especially for problems where one cannot codify the rules that govern the learning, such as image recognition or playing chess. The reader is then quickly brought to a set of examples showing that not everything is OK. Using simple image-captioning examples involving sheep, it becomes clear that not all rules learnt by an AI program are the right rules. Biases creep into any learning task, and removing them is not always straightforward. Any AI depends on its data, which means it is meant to replicate what has been done in the past in a semi-automated or automated manner. This does not mean that the task being done is the right thing to do. Any errors, biases or incorrect data will make the AI learn those too.

The chapter ends with four warning signs that any company building AI tools needs to think about:

  1. The problem is too hard: The domain and the rules of the game change so often that it is difficult for an AI to come up with an effective and robust solution
  2. The problem is not what we thought it was: This refers to a situation in which an AI solves a problem efficiently, but ultimately the wrong problem
  3. There are sneaky shortcuts: All an AI cares about is learning from the data and achieving the desired goal. Sometimes, in pursuit of that goal, it takes sneaky shortcuts that do no good as far as churning out a sustainable AI solution is concerned
  4. The AI tried to learn from flawed data

AI is everywhere, but where is it exactly?

Well, AI is already replacing humans in many areas where the work is high-volume and repetitive, and where a somewhat buggy performance is acceptable; think of spam filtering, research-lab diagnostics etc. AI is also being increasingly used to generate short, independent articles from sports statistics. For any task where writing down the rules takes a lot of time or is near impossible, one can turn to AI and see if any of the algorithms can figure out the rules and give a reasonable performance. Using AI means that there could be false positives, but the reduction in false negatives saves a lot of time that would otherwise be spent chasing them down.

What are the aspects that determine whether something can be automated by AI? The author uses the example of training an AI to churn out cheesecake recipes versus general recipes to argue that AI is better at tasks with a narrow and well-defined objective. The AI was able to generate a reasonable cheesecake recipe, but when it came to a general recipe, it failed dramatically. The more general the task, the less suited AI is to do it well, as there are many aspects that are not defined in the data, which is the only ingredient the algorithm gets. Another example that supports this argument is the chatbot Facebook launched with much fanfare, claiming it could answer any question. It was shut down because the program was not able to handle such a generic problem.

AI researchers are working on “One-shot learning”, an approach where the program can learn with fewer data points.

The main point of the author, echoed by other well-known experts in this field, is that we are nowhere near Artificial General Intelligence. All the applications of AI so far can be termed ANI - Artificial Narrow Intelligence.

The main problem with training any AI is the need for datasets. Needless to say, the more general the task, the more varied the data one should collect. To circumvent the problem of limited data, AI researchers and practitioners have turned to other approaches, such as simulating data or learning from a pre-existing model (transfer learning). Another limitation of any AI program is memory. Most AI out there can only remember a few time steps into the past, and hence tasks that inherently need long-term memory, composition or abstract arguments are difficult for an AI to deliver.

Even with fancy models such as LSTMs or the OpenAI models, NLP-generated text might match the surface qualities of human speech, but most often it lacks any deeper meaning.

The author concludes the chapter with the example of self-driving cars: even though these cars have logged more miles than a human would in several lifetimes, they are nowhere near capable of driving on regular roads. Driving in a city is so generic a problem, and requires so much contextual data, that it will remain an unsolved problem.

How does it actually learn?

The author does a great job of explaining the basic algos used in today’s world of machine learning. Basic neural networks are explained through a classification example where the NN has to learn to differentiate delicious sandwiches from the rest. Along the way, the author manages to touch upon the main aspects of a neural network, i.e.

  • Hidden layers
  • Activation Functions
  • Dealing with class imbalance
  • Regularization
  • Dangers of over-fitting - the author uses the phrase, “the temptation of AI to take a shortcut”
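
To make these pieces concrete, here is a minimal sketch of such a classifier: a tiny one-hidden-layer network trained by plain gradient descent. The “sandwich” features and labels are invented purely for illustration; this is not the book’s actual setup.

```python
import numpy as np

# Invented toy features: [has_cheese, has_mud]; label 1 = delicious.
X = np.array([[1, 0], [1, 1], [0, 0], [0, 1]], dtype=float)
y = np.array([[1], [0], [0], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.1
for _ in range(5000):                            # plain gradient descent
    h = np.tanh(X @ W1 + b1)                     # hidden layer, tanh activation
    p = sigmoid(h @ W2 + b2)                     # predicted "deliciousness"
    grad_out = p - y                             # cross-entropy gradient at output
    grad_h = (grad_out @ W2.T) * (1 - h ** 2)    # backprop through tanh
    W2 -= lr * h.T @ grad_out;  b2 -= lr * grad_out.sum(axis=0)
    W1 -= lr * X.T @ grad_h;    b1 -= lr * grad_h.sum(axis=0)

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
print(preds.ravel().tolist())  # the net learns: delicious only with cheese and no mud
```

With only four training points, this also hints at the over-fitting danger above: the network memorizes its tiny dataset, and nothing guarantees the “rule” it found generalizes to sandwiches it has never seen.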

Markov chains and Random Forest algorithms are explained with a few easy-to-grasp examples. Subsequently the author illustrates optimization algos such as hill climbing, gradient descent and convex optimization. The author also delves into the world of genetic algorithms using the example of making a robot learn to solve a crowd-control problem. There is also a section on GANs, which Ian Goodfellow first introduced to the deep-learning world. After this whirlwind tour of algos, the author does not fail to point out that the real world is dominated by combinations of algos that do specific things well, i.e. AI is good at solving narrow problems.
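The simplest of these, a Markov chain, fits in a few lines. The sketch below builds a toy bigram chain over a made-up snippet of text and samples from it; the corpus and function names are invented for this example, not taken from the book.

```python
import random
from collections import defaultdict

# A deliberately tiny corpus; real chains are built from far more text.
corpus = ("knock knock who is there an AI an AI who "
          "an AI that tells knock knock jokes").split()

# Bigram transition table: each word maps to the words that followed it.
transitions = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    transitions[a].append(b)

def generate(start, max_words, seed=0):
    """Random-walk the transition table until max_words or a dead end."""
    random.seed(seed)
    words = [start]
    while len(words) < max_words and transitions[words[-1]]:
        words.append(random.choice(transitions[words[-1]]))
    return " ".join(words)

print(generate("knock", 8))
```

Because the chain only ever looks one word back, the output sounds locally plausible and globally meaningless, which is exactly the short-memory limitation the author keeps returning to.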

It is Trying!

This chapter talks about the problems that arise in the real world when an AI program gets to work. Image recognition is one field that has seen rapid adoption of DNN-based techniques. However, there are still idiosyncratic examples that make us suspicious of AI performance. One of the popular examples is “giraffing”, a phenomenon attributed to an AI that spots giraffes in pictures where there are none. The main issue is with the training data: since many of the training images of natural scenery happen to contain a giraffe, the algo is biased towards making false-positive errors.

In today’s world, it is the data that can be used for training AI that is super valuable. If there is not enough data, there are techniques being explored in which data is generated by GANs or crowdsourced. Even though these techniques seem to circumvent the problem of limited data, they introduce many new problems, such as: What if the crowdsourced party is using a bot to generate data? What if the GANs themselves are fed inadequate data to generate variants from? The real world is characterized by messy data, incomplete data, time-wasting data, inadequate data and unintentional memorization. Since AI essentially works off these datasets, the performance of the program ultimately takes a hit.

The key point from this chapter is that the data needs to be good enough for the AI to generate meaningful output. At the same time, throwing massive amounts of data at the problem is not going to yield better results; one might have to restrict the problem statement, make it narrow, and curate the data so that the machine learns the relevant rules and performs well on the narrow task.

What are you really asking for?

The author talks about the importance of framing the reward function in any AI design. AIs are prone to solving the wrong problem because:

  • They develop their own ways of solving a problem rather than relying on step-by-step instructions from a programmer
  • They lack the contextual knowledge to understand when their solutions are not what humans would have preferred

It is the job of the AI programmer to

  1. Define the job clearly enough to constrain the AI to useful answers
  2. Check whether the AI has managed to come up with a solution that is not useful

The author provides a ton of examples of faulty reward functions that made AI do weird stuff. Even the YouTube teams at Google have not been able to escape the clutches of erroneous reward-function designs.
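The flavour of these failures is easy to reproduce. The toy sketch below (entirely invented, not one of the book’s examples) rewards a candidate answer by penalizing each wrong character, but forgets to penalize missing ones, so a degenerate answer wins.

```python
target = "hello world"

def reward(candidate):
    # One point lost per mismatched position. The flaw: zip() stops at the
    # shorter string, so missing characters cost nothing.
    return -sum(1 for a, b in zip(candidate, target) if a != b)

candidates = ["hello worxx", "jello world", "", "hi"]
best = max(candidates, key=reward)
print(repr(best))  # the empty string scores 0, beating every honest attempt
```

The optimizer did exactly what it was asked, which is the author’s point: the burden is on the human to ask for the right thing.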

Hacking the Matrix, or AI finds a way

The author talks about creating simulated environments so that an AI can learn to codify the unwritten rules. Humans cannot reproduce the actual environment in a simulation; the world is far too detailed, and attempting to capture all of it would defeat the whole purpose of simulation. Given that robots learn in simulated environments, the author gives a range of examples showing that such learning is not at all adequate for the robots to do justice to real-world scenarios.

Unfortunate shortcuts

The author explores the topic of AI taking shortcuts in achieving its objective. All the examples fall under topics such as overfitting, class imbalance and algorithmic bias. What can one do to make sure that an AI does not take a shortcut? The author suggests first realizing that any AI algorithm will contain some sort of bias, as it is trained on human-generated data that inherently contains bias. Being cognizant of this fact, it is better to make sure that the algorithm is tweaked before it goes to production, rather than discovering the problems after it has been rolled out. The chapter has a nice set of examples that will make anyone cautious about the various steps that are part of building an AI bot. A few examples mentioned in the chapter are

  • Amazon discontinuing its applicant pre-screening bot
  • Predictive policing going wrong
  • The COMPAS algo
  • Google Flu Trends
  • Radiologists using an AI that has taken shortcuts while learning
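
The class-imbalance shortcut in particular takes only a few lines of arithmetic to demonstrate. In this sketch (numbers invented for illustration), a “model” that always predicts the majority class looks impressive on accuracy while catching none of the cases that matter:

```python
# Invented toy numbers: a screening dataset where 99% of cases are negative.
labels = [0] * 990 + [1] * 10

# The shortcut "model": always predict the majority class.
predictions = [0] * len(labels)

accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
recall = sum(p == 1 and y == 1 for p, y in zip(predictions, labels)) / 10

print(f"accuracy={accuracy:.2f}  recall={recall:.2f}")  # accuracy=0.99  recall=0.00
```

An accuracy-only reward declares this model a success, which is why metrics such as recall, and human oversight of what the metric actually measures, matter.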

The key message from this chapter is - “Oversight is the key to designing a good AI bot”

Is an AI brain like a human brain?

The author reminds one of an important frailty of current AI systems - catastrophic forgetting, a phenomenon where an AI cannot remember more than a certain number of time steps into the past - and notes that transfer learning across tasks is still a utopia. This chapter presents the same set of themes as the previous chapters, but with a different set of examples.

Human bots (where can you not expect to see AI?)

This chapter is more of a quasi-summary of the book, where the author reiterates four important questions one must keep in mind while evaluating an AI system:

  1. How broad is the problem?
  2. Where did the training data come from?
  3. Does the problem require a lot of memory?
  4. Is it just copying human biases?

A human-AI partnership

By the time one reaches the penultimate chapter, it becomes evident to any reader that AI in its current form is far from what we would expect a bot to do. In the author’s words,

Left to its own devices, it will flail ineffectually and at worst it will solve the wrong problem entirely

For any practical machine learning to succeed, it must be a combination of human and machine effort. Framing the right problem, curating the dataset so that it is specific to the task at hand, seeing to it that there are fewer biases in the dataset, and incorporating contextual knowledge into the learning process are all important tasks of the human. The machine, in turn, will then be effective in doing its part, i.e. coming up with learning rules that cannot be manually specified and meeting the reward function set by the human. The human’s role also becomes important in maintaining the algorithm. The world is never static, so from time to time the data on which the machine was trained becomes stale, and constantly retooling the algo is again a human effort.

The author cites a varied set of examples to show that pure AI creativity, the let-the-algos-figure-it-out approach without human intervention, has led mostly to failures.


The author touches upon many aspects of AI and concludes that we are not anywhere close to what we would ideally want from AI, i.e. AGI. Given the current state of AI and its limitations, it is obvious that humans have a great role to play in defining the objective function, using the appropriate datasets and reducing the biases that creep into AI learning. From the many arguments made by the author, it is clear that the hype saying “Robots will replace Humans” is definitely not something that will materialize in our lifetime. Thoroughly enjoyed reading the book.