AI-Generated Review Summaries: A Game-Changer or a Gimmick for Amazon Shoppers?

By: Justine Pearson*


Would you be surprised if I told you that this article's title was created by an artificial intelligence program called Perplexity? If you have ever used search recommendations on Google, edited a paper with Grammarly, or had a conversation with ChatGPT, artificial intelligence (AI) has already impacted your life. But AI is more than search-bar suggestions and odd image renderings. As AI grows more sophisticated, several industries have begun employing it to improve efficiency and customer experience. One company that has shown a prolonged and serious interest in the vast applications of AI is Amazon.

Amazon, the world’s largest online retailer, has begun using AI for an increasingly wide range of applications. Most recently, Amazon announced that it is launching a new generative AI (GenAI) program.[1] This program will provide users with a summary of all the reviews for a single product on its site.[2] Amazon hopes this will allow users to find the products best suited to their needs in less time.[3] While this may initially seem useful and efficient, a couple of key concerns have come to light in recent weeks.[4]

First, the thoughtful consumer should question how Amazon’s new tool will filter out the thousands of fake reviews that are littered throughout the online platform. Second, how will Amazon ensure that its program is giving due weight to both positive and negative reviews? Before we begin a deeper inquiry into these issues, a basic understanding of how GenAI operates will be helpful.

Generative AI is a type of AI that creates new output data inspired by the input data on which it is trained. A concise formal definition of generative AI, provided by Google Cloud Artificial Intelligence Technical Curriculum Developer Dr. Gwendolyn Stripling, is “a type of artificial intelligence that creates new content based on what it has learned from existing content… when given a prompt, GenAI uses the model to predict what an expected response might be, and this generates new content. Essentially it learns the underlying structure of the data and can then generate new samples that are similar to the data it was trained on.”[5] Generative AI can be used to translate, summarize, and much more.[6] In the present case, Amazon will use GenAI to take consumer reviews as input (the unstructured dataset) and summarize them into a short synopsis (the output) that other users can then access on the product details page.
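To make the reviews-in, synopsis-out pipeline concrete, here is a deliberately simplified, self-contained sketch. It is not Amazon's actual system (whose model and prompts are not public): a real GenAI summarizer uses a large language model, whereas this toy simply surfaces the most frequently mentioned terms. The function name, stopword list, and sample reviews are all illustrative.

```python
import re
from collections import Counter

# Illustrative stopword list; a real system would not work this way at all.
STOPWORDS = {"the", "a", "and", "is", "it", "this", "i", "to", "of",
             "very", "was", "but", "be", "could"}

def summarize_reviews(reviews, top_n=2):
    """Return a one-line 'synopsis' naming the most-mentioned terms."""
    words = Counter(
        w
        for review in reviews
        for w in re.findall(r"[a-z']+", review.lower())
        if w not in STOPWORDS
    )
    common = [word for word, _ in words.most_common(top_n)]
    return "Customers frequently mention: " + ", ".join(common)

reviews = [
    "Great battery life and sturdy build.",
    "The battery lasts forever; sturdy and light.",
    "Battery could be better, but sturdy design.",
]
print(summarize_reviews(reviews))
# "battery" and "sturdy" each appear three times, so they lead the synopsis
```

The point of the sketch is structural: whatever the model learns, its output is only as good as the reviews fed in, which is exactly why the fake-review problem discussed below matters.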

Though a short synopsis such as this may save users valuable time that they would otherwise need to spend scanning hundreds of user reviews, one must wonder how Amazon will be able to confidently weed out all of the fake reviews that have plagued its platform for years. All the way back in 2015, Amazon filed its first lawsuit addressing fake reviews against operator Jay Gentile.[7] In 2021, Amazon called upon social media companies to help them cut down on fake review companies.[8] In 2022, Amazon began filing complaints against fake review companies such as AppSally and Rebatest.[9] Most recently, on July 31, 2023, Amazon filed a complaint against the operators of a site known as “PMNLWeb.”[10]

In Amazon’s complaint against PMNLWeb, the company explains how sites such as PMNLWeb operate nefarious schemes, how these fake reviews negatively impact its operations, and most importantly, how these schemes may negatively impact its customers.[11] Amazon’s complaint consists of three claims that refer to violations of the Washington Consumer Protection Act and Washington common law.[12]

Amazon’s first claim cites a violation of the Washington Consumer Protection Act, RCW 19.86.020, which prohibits “unfair methods of competition and unfair or deceptive acts or practices in the conduct of any trade or commerce…”[13] The second claim for relief cites intentional interference with contractual relations. This claim arises because every seller on Amazon is subject to the Amazon Services Business Solutions Agreement, which prohibits the employment of fake and paid reviews.[14] Finally, Amazon claims that the operators of PMNLWeb were unjustly enriched by their wrongful conduct.[15] On these claims, Amazon seeks injunctive relief against the operators of PMNLWeb, disgorgement of profits, and delivery of a “full and complete accounting of all amounts obtained as a result of Defendants’ illegal activities . . . .”[16]

Even if Amazon prevails in this most recent lawsuit, how can consumers be sure that Amazon’s new GenAI will be able to sift through the remaining faulty reviews?[17] If Amazon’s current machine-learning models (AI models) are already struggling against the hundreds of millions of suspected fake reviews, how will the new GenAI join this same battle while also delivering supposedly trustworthy information to consumers?[18]

If a generative AI model is given faulty information, it may produce hallucinations.[19] Hallucinations are “words or phrases that are generated by the model that are often nonsensical… these can be a problem as they can make the output text difficult to understand . . . [and] make the model more likely to generate incorrect or misleading info.”[20] With so much faulty information in Amazon’s review database, it is difficult to imagine how even the most sophisticated artificial intelligence program would be able to produce trustworthy information.

Just as concerning is the question of how Amazon’s GenAI will give proper weight to negative reviews when so many fraudulent positive reviews have been paid for by bad actors. The United Kingdom published a report on fake online reviews which found that on “e-commerce platforms widely used by UK consumers . . . 11% to 15% of all reviews for three common product categories (consumer electronics, home and kitchen, sports and outdoors) are fake.”[21] Thus, if a product has 10,000 positive reviews and 1,100 (11%) of those reviews are fake, how will the AI properly weigh the 2,500 negative or neutral reviews that were written by actual consumers?
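The arithmetic behind that hypothetical can be worked out in a few lines. The 10,000-positive / 2,500-negative split is this article's illustration, not measured data; only the 11% rate comes from the UK report.

```python
# Back-of-the-envelope math using the article's hypothetical numbers.
positive_reviews = 10_000
fake_rate = 0.11                                       # UK report's low-end estimate
fake_positive = int(positive_reviews * fake_rate)      # 1,100 fake positives
genuine_positive = positive_reviews - fake_positive    # 8,900 genuine positives
negative_or_neutral = 2_500

# Share of genuine feedback that is negative or neutral...
true_negative_share = negative_or_neutral / (genuine_positive + negative_or_neutral)
# ...versus the share a model sees if every fake slips through.
naive_negative_share = negative_or_neutral / (positive_reviews + negative_or_neutral)

print(f"true share:  {true_negative_share:.1%}")   # ~21.9%
print(f"naive share: {naive_negative_share:.1%}")  # 20.0%
```

Even at the report's low-end 11% rate, undetected fakes make negative feedback look roughly two percentage points less common than it really is, and the distortion only grows as the fake rate rises.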

Until Amazon reveals more about how it intends to run this program, consumers can only come to a few unsatisfying conclusions about these issues: 1) fake reviews will continue to be created so long as bad actors are willing to solicit them; 2) Amazon will need to find a way to ensure that these reviews do not taint its new GenAI’s learning dataset; 3) Amazon will need to find a way to properly balance the importance of positive and negative reviews; and 4) it may always be worth a quick scroll through all of those one-star reviews.

* J.D. Candidate, Class of 2025, Sandra Day O’Connor College of Law at Arizona State University.

[1] Haleluya Hadero, Amazon Is Rolling Out a Generative AI Feature that Summarizes Product Reviews, AP News (Aug. 14, 2023, 4:42 AM),

[2] Id.

[3] Id.

[4] CBS News, AI Comes to Amazon Product Reviews, YouTube (Aug. 16, 2023),

[5] Google Cloud Tech, Introduction to Generative AI, YouTube (May 8, 2023),

[6] Id.

[7] Todd Bishop, Amazon Files First-Ever Suit over Fake Product Reviews, Alleging Sites Sold Fraudulent Praise, GeekWire (Apr. 8, 2015, 5:31 PM),

[8] Annie Palmer, Amazon Asks Social Media Companies to Help It Root Out Fake Reviews, CNBC (June 16, 2021, 5:26 PM),

[9] Annie Palmer, Amazon Sues Two Companies that Allegedly Help Fill the Site with Fake Reviews, CNBC (Feb. 22, 2022, 2:10 PM),

[10] Greg Lamm, Amazon Targets Website in Latest Fight over Fake Reviews, Law360 (Aug. 2, 2023, 8:17 PM)

[11], Inc. v. Does 1-5, d/b/a PMNLWeb, No. 23-2-14063-0, Complaint at 6 (Wash. Super. Ct. 2023).

[12] Id.

[13] Wash. Rev. Code Ann. § 19.86.020 (1961).

[14] Id.

[15] Id.

[16] Id.

[17] CBS News, supra note 4.

[18] Id.

[19] Google Cloud Tech, supra note 5.

[20] Id.

[21] Alma Economics, Fake Online Reviews Research 9 (2023).