Federal agencies have struggled to ensure that the public comments they receive on proposed rules come from real people and that the agency isn’t being spammed with the same comments over and over again.
According to experts who spoke at the General Services Administration’s Public Meeting on Mass and Fake Comments, the challenge agencies face in combating comments submitted by bots or by people using someone else’s identity lies in ensuring that interested citizens don’t face too high a barrier to proving they are legitimate commenters.
“We want to be very careful to not amend the tool in such a way that cracks down on something that’s completely normal, permissible, constitutionally valid and a positive way to engage with government. There are some problems in there. There are abusive comments, certain questions about what agencies need to respond to. But I think to me those seem to be surmountable,” said Patrick Hedren, vice president of labor, legal and regulatory policy at the National Association of Manufacturers.
According to Michael Fitzpatrick, head of global regulatory affairs at Google, there are tools that eRulemaking sites like Regulations.gov can use to ensure that the person submitting a comment on a proposed rule is actually a human being, and not a bot using artificial intelligence, such as a captcha that requires users to identify pictures or letters.
But that method can also have its downsides.
“Imagine if every time a comment was submitted, that method of protection adds a level of friction to the process. And this is the great balance for agencies and for the rulemaking process: they want to protect — we want to protect — against bad actors, but they don’t want to add a level of friction that deters democratic participation,” said Fitzpatrick.
Agencies could also have the option of using tools that track a visitor’s behavior to provide an evaluation of whether that person is behaving like a person or a bot. That evaluation can then allow the agency to determine which commenter to apply more scrutiny to, and which to allow submission without any further barriers.
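The triage described above can be sketched as a simple score threshold. This is a minimal illustration, not any agency's actual system: the function name and the 0.5 cutoff are hypothetical, though commercial bot-detection services (reCAPTCHA v3, for example) do expose a comparable 0.0–1.0 "likely human" score for a site to act on.

```python
# Hypothetical sketch: route a comment submission based on a
# bot-detection score, where 0.0 means "behaves like a bot" and
# 1.0 means "behaves like a person". The threshold is illustrative.

def triage_comment(human_likelihood: float, threshold: float = 0.5) -> str:
    """Decide how much scrutiny a submission gets.

    Scores at or above the threshold pass with no added friction;
    lower scores trigger an extra verification step (e.g. a CAPTCHA).
    """
    if human_likelihood >= threshold:
        return "accept"              # person-like behavior: no extra barrier
    return "extra_verification"      # bot-like behavior: apply more scrutiny
```

The point of the split is the balance Fitzpatrick describes: most visitors never see the extra step, so friction is added only where the behavioral evidence warrants it.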
Addressing mass comments could prove more challenging, as organizations frequently run legitimate mass comment campaigns that make it easier for many interested citizens to provide their views on an issue.
“Are we sure that we have communicated with the public on what this is? Does the public understand what happens? It’s not a vote; it’s an engagement with an agency,” said Hedren.
According to Reeve Bull, research director for the Administrative Conference of the United States, agencies are not legally required to take the public’s opinion into consideration when making a new regulation. Rather, they must consider all facts and data presented in public comments before issuing a rule.
Under that requirement, a large number of identical comments that merely express support for or opposition to a rule would have little impact, but a mass comment campaign that provides extensive facts for the agency to consider could carry real weight.
If the public misunderstands a comment as a vote, rather than as information provided to the agency, bad actors are more likely to spam a proposed rule with fake opinion comments that won’t actually have much impact.
“We don’t see mass comment campaigns as an abuse of the rulemaking process,” said Steven J. Balla, associate professor of political science, public policy and public administration, and international affairs, and a senior scholar at the George Washington University Regulatory Studies Center. He added that mass commenting does not fully “democratize” the process or grind agency regulatory programs to a halt as some predicted.
Though agencies are not required to use it, GSA has produced a beta version of Regulations.gov that aims to promote more engagement with federal rulemaking by updating the site’s user interface, commenting process and behind-the-scenes architecture.
Jessie Bur covers federal IT and management.