Online Public Hearings - Comment Spambot Magnet or Best-Behaved Citizens?
One of the concerns that new cities and counties have when considering adopting hybrid public hearings using the People Speak platform is comment spam: if they launch a public website with web-based forms where anyone can submit comments on items, will it be abused by Internet users? Will polarizing views result in misbehavior, profanity, or unsolicited advertisements? Or worse, will Internet spambots bring a storm of automated comment spam?
We recently reviewed the comment data across all of the sites using the People Speak platform and looked at how many comments are rejected and why. Here's what we found out:
In over half a decade of operation, no city or county has ever received a bot comment on their site.
In that time, of the thousands of web-based comments received, only 1.48% of all comments have been rejected.
Of the rejected comments, the majority (64.7%) were incomplete submissions or did not follow the published comment policy.
The remaining rejected comments were unrelated submissions (such as a service request, or a comment posted on the wrong item or topic) or accidental duplicates.
Less than 1% of comments were rejected for being non-compliant with the comment policy.
Looking more closely at the rejections for incomplete submissions or comment policy violations, which represent 0.96% of all comments (that is, 64.7% of the 1.48% rejected), the specific reasons fall into five categories, sketched in code after the list:
the commenter failed to provide the contact information, such as name and email address, required by the city,
the comment was directed at prior commenters, the public, or staff (rather than at the Planning Commission or City Council, for example), which the city or county comment policy explicitly prohibits,
the comment form was used incorrectly to submit a private question to staff,
the comment was submitted by a decision maker (such as a Commissioner, Councilor, or Board Member), who is not allowed to leave public comments according to the comment policy,
the comment was inflammatory, used threatening or abusive language, used profanity, or used language or concepts deemed offensive or demonstrably false, all of which the comment policy prohibits.
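For illustration only, here is one way a moderation tool might record these rejection categories as data. The type and field names below are hypothetical assumptions and are not taken from the People Speak platform.

```typescript
// Hypothetical encoding of the five rejection categories described above.
// All identifiers are illustrative, not actual platform code.

type RejectionReason =
  | "missing-contact-info"        // required name/email not provided
  | "directed-at-wrong-audience"  // aimed at commenters, the public, or staff
  | "private-question-to-staff"   // comment form misused for a staff question
  | "submitted-by-decision-maker" // commissioner, councilor, or board member
  | "prohibited-language";        // inflammatory, abusive, profane, or false

interface ModerationDecision {
  commentId: string;
  accepted: boolean;
  reason?: RejectionReason; // present only when accepted === false
}
```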
What’s missing from these categories? Automated spam. Every rejected comment came from a real resident rather than a computer-generated or automated source.
What about spambot comments?
In all of the years running online public hearings, not a single city or county has received a bot comment.
How People Speak drives high quality comments
Combating comment spam isn’t a trivial matter, and we’re proud of the low number of comment rejections and our years without spambot trouble. Here’s what we do to ensure high quality comments (a simplified sketch of the intake workflow follows the list):
Use CAPTCHA to prevent automated comment spamming
Disallow anonymous posting by default to add friction to discourage bad actors
Hold all asynchronous comments for moderation to ensure a human review process
No support for hyperlinks, which makes comments on the platform a less attractive target for link spam
No support for threaded comments to reduce flaming, trolling, or other personal attacks
Publish a clear, easy-to-find comment moderation policy, authored by the city or county and highlighted at various touchpoints so residents are aware of the policy when they submit comments
Provide a decision-making criteria feature that allows staff to post the criteria by which each item will be evaluated, helping residents understand how to keep their comments relevant
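To make the list above concrete, here is a minimal sketch of a comment-intake handler that applies several of these safeguards: a CAPTCHA gate, required contact information, no hyperlinks, and holding everything for human moderation. The names (CommentSubmission, receiveComment, verifyCaptcha) are assumptions for illustration, not the actual People Speak implementation.

```typescript
// Sketch of a comment-intake pipeline applying the safeguards listed above.
// All identifiers are hypothetical and do not come from the People Speak codebase.

interface CommentSubmission {
  name: string;
  email: string;
  body: string;
  captchaToken: string;
}

type IntakeResult =
  | { status: "held-for-moderation"; commentId: string }
  | { status: "rejected"; reason: string };

// Assumed helper: in a real deployment this would call the CAPTCHA
// provider's verification endpoint; here it is a placeholder check.
async function verifyCaptcha(token: string): Promise<boolean> {
  return token.length > 0;
}

const HYPERLINK_PATTERN = /https?:\/\/|www\./i;

async function receiveComment(submission: CommentSubmission): Promise<IntakeResult> {
  // 1. CAPTCHA gate: block automated submissions before anything else.
  if (!(await verifyCaptcha(submission.captchaToken))) {
    return { status: "rejected", reason: "CAPTCHA verification failed" };
  }

  // 2. No anonymous posting: name and email are required.
  if (!submission.name.trim() || !submission.email.trim()) {
    return { status: "rejected", reason: "Missing required contact information" };
  }

  // 3. No hyperlinks: keeps the platform a poor target for link spam.
  if (HYPERLINK_PATTERN.test(submission.body)) {
    return { status: "rejected", reason: "Hyperlinks are not supported" };
  }

  // 4. Everything else is queued for human review rather than published directly.
  const commentId = `pending-${Date.now()}`;
  return { status: "held-for-moderation", commentId };
}
```

Note that even a submission that passes every automated check is only queued, never auto-published: a moderator still reviews it against the city or county's comment policy before it appears on the site.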
Additional resources
Wondering what a comment policy looks like? Check out the comment policy for Lakewood Speaks and Wheat Ridge Speaks.
Want to learn more about our comment moderation tools? Contact us for a demo.