AI: What the emerging regulatory environment means for hospitality companies

Generative AI (GenAI) tools are rapidly gaining traction in virtually every sector of the global economy, including the hospitality industry. From the reservations and booking process to check-in and check-out to customer retention and sales, the use cases for machine learning and GenAI are nearly endless. As the regulatory regime for these technologies continues to evolve, there are some key considerations that hospitality companies should keep in mind as they look to build and leverage these powerful tools to interact with guests and employees and improve the bottom line.

Use cases for GenAI by hospitality companies

Intelligent Virtual Assistants

Most hospitality company websites offer a virtual assistant to help guests navigate the booking process. The chatbots that deliver these services are powered by machine learning and GenAI. Virtual assistants create a more efficient customer service experience by decreasing response times for customer support and lowering costs for companies that no longer need to staff large call centers.

Predicting Guest Preferences and Upselling
GenAI can help hospitality companies analyze large amounts of data to understand guest preferences and purchase behavior. GenAI tools can also help create increasingly personalized marketing campaigns based on known and inferred preferences, which can increase revenue through new product or service purchases and improve customer retention through better engagement.

Further, pricing models use GenAI to assess data such as type of guest (loyalty level, or not a loyalty member), demand and available inventory to make dynamic pricing even more granular for new guests, repeat guests and loyalty program members. GenAI can also enable personalized recommendations and upsells at check-in.
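To make the mechanics concrete, the pricing logic described above can be sketched as a simple rule-based function. This is a minimal illustration, not any vendor's actual model: the loyalty tiers, discounts and demand multiplier are all assumptions, and a production system would learn these adjustments from data rather than hard-code them.

```python
# Illustrative sketch of granular dynamic pricing based on guest type,
# demand and loyalty status. All tiers and multipliers are assumptions.
from dataclasses import dataclass


@dataclass
class Guest:
    loyalty_tier: str  # assumed tiers: "none", "member", "elite"
    repeat_stays: int  # prior stays with the brand


def quote_rate(base_rate: float, guest: Guest, occupancy: float) -> float:
    """Adjust a base nightly rate by demand and guest profile."""
    # Demand: scale the rate up as occupancy (0.0-1.0) rises.
    rate = base_rate * (1.0 + 0.5 * occupancy)
    # Loyalty: illustrative discounts by tier.
    discounts = {"none": 0.0, "member": 0.05, "elite": 0.10}
    rate *= 1.0 - discounts.get(guest.loyalty_tier, 0.0)
    # Repeat guests: small retention discount, capped at 6%.
    rate *= 1.0 - min(0.02 * guest.repeat_stays, 0.06)
    return round(rate, 2)
```

For example, a loyalty member with two prior stays quoting a room at 80% occupancy would see demand push the rate up while the tier and repeat-stay discounts pull it partway back down.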

Corporate Operational Tools
GenAI tools are already being used to handle back-office operations such as staffing, employee communications and even HR tasks such as resume review. These applications are replacing mundane and repetitive human tasks that can be carried out more quickly and efficiently by GenAI tools.

The developing AI regulatory regime

United States
The United States does not currently have a general federal AI regulatory regime. Instead, there is a patchwork of standards and voluntary guidelines promulgated by various executive branch agencies and NGOs, some regulation (with more coming) within individual states, and emerging enforcement, including at the FTC and the EEOC.

At the federal level, the Trump and Biden administrations published Executive Orders creating a foundation for the development and use of these technologies by the federal government. The National Institute of Standards and Technology has published an “AI Risk Management Framework” intended to guide organizations in the trustworthy design, development, use and evaluation of AI products. The White House published a “Blueprint for an AI Bill of Rights” in October 2022, detailing key areas of concern: ensuring systems are safe and effective, preventing algorithmic discrimination, protecting consumer privacy and data security, providing notice and explanation, and ensuring human alternatives, consideration and fallback.

The FTC has also entered the AI space in a significant way. Through various blog posts, a joint agency enforcement commitment and its recent investigation into OpenAI, the FTC is staking a claim to being the lead general federal AI enforcement authority through Section 5 of the FTC Act. The FTC has shown it is concerned with transparency and notice to consumers about the use of AI; the accuracy of companies’ claims about their AI products and services; the potential for bias in the use of AI; ensuring company decisions based on AI are fair and not misleading; ensuring that AI systems protect user privacy (including in connection with wide-scale internet data scraping) and do not endanger data security; ensuring GenAI systems are safe and that the IP ownership and rights of users are properly represented; and ensuring that AI systems are accurate, especially with respect to outputs regarding identifiable individuals.

Some U.S. states with comprehensive privacy laws are considering how to address AI and automated decision-making technologies in their state laws and implementing regulations. For example, the California Privacy Protection Agency recently solicited public comment on how to craft regulations surrounding automated decision-making, and the state privacy law (the California Consumer Privacy Act) already includes language around data minimization and dark patterns, privacy issues that can arise when large data sets are used to create GenAI tools.

European Union

On the international stage, the EU AI Act takes a risk-based approach to regulation, which would be overseen by a proposed AI Board. The goal of the EU framework is to establish a regime that is trustworthy, reflects ethical standards, supports employment and influences global standards. Notably, companies would be obligated to label AI-generated content to prevent the spread of false information. Businesses would also be required to publish a summary of the copyrighted materials used to train their tools, to combat wide-scale data scraping. EU regulators have spent at least two years seriously thinking about these issues and are well ahead of their U.S. counterparts.

Considerations for hospitality companies using AI

As noted above, both internal and guest-facing use cases are emerging for GenAI in the hospitality industry, ranging from internal HR use to external applications in check-in and check-out, customer service, sales, and personalized communications and marketing.

Avoid inheriting developers’ legal issues: privacy, IP and algorithmic discrimination

There has already been significant litigation against AI developers, including for alleged privacy and intellectual property violations. These cases target the developers of GenAI. From the perspective of hospitality companies, the concern is that this wave of litigation may begin to target not just developers, but also licensees of this technology.

To that end, licensees of AI technology should conduct due diligence on GenAI tools before licensing them. How were they trained? On what data? Were they trained on IP-protected materials? On personal information scraped from the internet? How reliable are they? Do they make biased decisions or yield biased outputs? One way to assess this is to carefully review the platform’s system cards. Be sure to opt out of prompt data being retained or used for training by the AI platform, and, as a rule, don’t enter personal information, trade secrets, source code or information subject to intellectual property protections into AI prompts. In your user interfaces, warn users to follow these rules as well.
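The rule against entering personal information into prompts can be backed up technically as well as by policy. Below is a minimal sketch of a pre-submission guard that redacts a few obvious patterns before a prompt reaches a licensed GenAI tool. The patterns and labels are illustrative assumptions; a real deployment would rely on a vetted PII-detection library rather than a handful of regular expressions.

```python
# Illustrative prompt-hygiene guard: redact simple personal-information
# patterns before a prompt is sent to a GenAI platform. The patterns
# below are assumptions for demonstration, not a complete PII detector.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def sanitize_prompt(prompt: str) -> tuple[str, list[str]]:
    """Redact matching patterns; return the cleaned prompt and what was found."""
    found = []
    for label, pattern in PII_PATTERNS.items():
        if pattern.search(prompt):
            found.append(label)
            prompt = pattern.sub(f"[{label.upper()} REDACTED]", prompt)
    return prompt, found
```

The returned list of detected categories can also drive the user-facing warning the paragraph above recommends, for example by reminding the user why part of their input was redacted.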

To the extent possible, try to get indemnities for these issues from the licensor. In the current environment, where there is a concentration of offerings, this may not be possible for the most widely used tools. But the market is expanding rapidly, and as competition among developers increases, their willingness to agree to appropriate contract terms will increase.

Accuracy and reliability
GenAI tools are smart, but they are still a work in progress. They can produce inaccurate outputs and cause unintended discriminatory outcomes. This is why it is important to test the technology before deploying it. Are the results accurate and reliable? If a GenAI customer service tool leads to guest frustration, the cost savings may not be worth it until accuracy improves. If HR uses of AI result in a discriminatory effect, the EEOC and the Department of Justice have made clear that they may take action; in the case of the EEOC, that has already happened.

Review your AI-generated user interfaces

GenAI is terrific for solving the “blank page” problem that user interface creators sometimes face. But it can also create problems that, even if unintended, can result in litigation and regulatory scrutiny, especially if its output is not reviewed by humans. A new body of law on “dark patterns” (user interfaces that cause users to make choices they may not want or intend to make) has emerged in recent years at the FTC, through enforcement actions, and in state comprehensive privacy laws, and the concept is gaining traction internationally. Through a comprehensive report and in enforcement actions (including one leading to monetary relief of close to half a billion dollars), the FTC has gone out of its way to make clear that it is policing this new theory. And it is not difficult to imagine how hospitality companies could get wrapped up in it when using GenAI to create user interfaces without human review.

Consider an AI-generated user interface created by a hospitality company for booking. Does it hide fees, or fail to disclose them adequately? Does it offer an upsell with a confusing way to refuse it? Does it ask guests to subscribe to a newsletter and make the “no” option more prominent than the “yes” option? Does it make it difficult to cancel a loyalty program membership? All of these could be dark patterns. This is why human review is critical when using GenAI to create user interface experiences.

GenAI offers the prospect of greater efficiency in use cases ranging from booking to check-in to customer service, sales and internal applications. There is no question that companies that embrace this new technology will gain an advantage in the market. However, as with any new technology, it is important to be thoughtful when rolling it out. The emerging regulatory environment prioritizes accuracy, safety, privacy and data security, intellectual property and the avoidance of bias. As this area matures, there will certainly be other issues of heightened scrutiny. But for now, due diligence, appropriate contract terms, deployment in appropriate use cases and human oversight will go a long way toward the goals of improved efficiency, increased revenue and decreased risk.

D. Reed Freeman Jr. is a partner, and Andrea Gumushian is an associate in the Washington, D.C. office of ArentFox Schiff LLP.