
AI Changes How Parliaments Must Publish Information

February 11, 2026


By Grant Vergottini, CEO of Xcential

Last week, I mentioned that I had attended a conference on AI in legislation in Brasília, Brazil. One aspect that got a considerable amount of attention was how Parliaments publish information. AI fundamentally changes how information is consumed, and you ignore these changes at your own peril.

In the past, the consumer was human. Information was presented to attract human attention, with a focus on presentation style, search engine optimization, and, for better or worse, how that information was monetized.

With AI, the consumer becomes the computer, specifically generative AI engines. Much has already been written about how this shift is altering web traffic patterns and reducing visits to websites that rely on that traffic to generate revenue. For Parliaments, the issue is not a declining revenue stream so much as ensuring that the public has access to accurate information.

Indeed, many of the strategies employed to attract human visitors, which appeal to or rely on their critical thinking skills to seek out the most authoritative-looking source of information, may well backfire when the traffic comes from AI.

AI engines work by consuming vast amounts of information, which they then use to formulate intelligent-looking answers to prompts. These answers can seem so well put together and authoritative that few people will delve any further to verify their accuracy. That is a huge problem.

There is no guarantee that AI engines will seek out authoritative sources of information as they learn. These algorithms tend to consume whatever information is most accessible to them, regardless of its source.

So, what might make information inaccessible to an AI engine? A few common culprits (see the sketch after this list):

  • Publishing documents in Word or PDF. Even if the AI engine can decipher the format, the paper-oriented layout is likely to confuse how it is interpreted.
  • Publishing structured information in a Zip file, taking highly useful information and hiding it inside an opaque file.
  • Requiring a form to be filled out to retrieve the information.
  • Requiring a login or some sort of subscription to the website.
  • Requiring the consumer to prove they are human.
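
To make these barriers concrete, here is a minimal sketch, in Python, of the kind of publish-time check a Parliament's web team might run against a publication URL. It is not Xcential tooling or a definitive implementation; the URL is a hypothetical placeholder, it relies on the widely used requests library, and the heuristics are illustrative only.

```python
# A rough sketch of an automated audit for the accessibility problems listed above.
# Assumptions: the `requests` library is installed; the URL below is hypothetical.
import requests

# Content types that a crawler can fetch but may struggle to interpret sensibly.
OPAQUE_TYPES = (
    "application/pdf",
    "application/zip",
    "application/msword",
    "application/vnd.openxmlformats-officedocument.wordprocessingml.document",
)

def audit_publication(url: str) -> list[str]:
    """Return a list of warnings about barriers to machine consumption."""
    warnings = []
    resp = requests.get(url, allow_redirects=True, timeout=10)

    # 1. Word, PDF, or Zip payloads: fetchable, but opaque or paper-oriented.
    content_type = resp.headers.get("Content-Type", "").split(";")[0].strip().lower()
    if content_type in OPAQUE_TYPES:
        warnings.append(f"Opaque or paper-oriented format: {content_type}")

    # 2. Login or subscription requirements show up as auth errors.
    if resp.status_code in (401, 403):
        warnings.append("A login or subscription appears to be required")

    # 3/4. Forms and human-verification challenges in the returned page body.
    body = resp.text.lower() if "html" in content_type or "text" in content_type else ""
    if "captcha" in body or "prove you are human" in body:
        warnings.append("A human-verification challenge may block automated consumers")
    if "<form" in body and "login" in body:
        warnings.append("A form may stand between the crawler and the content")

    return warnings

if __name__ == "__main__":
    # Hypothetical example URL; substitute a real publication page.
    for warning in audit_publication("https://example.parliament.gov/bills/hb-1234"):
        print("WARNING:", warning)
```

A check like this is deliberately crude; the point is simply that the barriers above are detectable, and a Parliament can test its own publications the way an AI crawler would encounter them.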

That last point deserves particular consideration. We’ve all seen the prompts that challenge you to prove you are human. Their intent is to keep AI engines away and, on the surface, that is well-intentioned. But it creates a new problem: it provides an opportunity for some other enterprising entity to supply this information to the AI engine, possibly in a way that intentionally spreads misinformation.

Parliaments need to think long and hard about how they publish information in the era of AI. They must publish in the best possible form for AI tools to consume, and take care that they don’t inadvertently create opportunities for the malicious spreading of misinformation.

Next time, I’m going to explore the internal use of AI tools and some of the considerations that come with it.