General Discussion
Guardian: 'Sending threatening posts among offences in revised online safety bill
Proposed laws require tech firms to prevent publication of harmful content or face substantial fines'
https://www.theguardian.com/society/2022/feb/04/sending-threatening-posts-among-offences-in-revised-online-safety-bill
snip
'Tech firms will also be required to prevent users from being exposed to content such as revenge porn, fraud and the sale of illegal drugs, or face the threat of substantial fines under the proposed changes. Previously, platforms such as Facebook and Twitter had to take such content down if it was flagged to them but now they would be legally required to prevent users from being exposed to them in the first place.
The culture secretary, Nadine Dorries, said: "We are listening to MPs, charities and campaigners who have wanted us to strengthen the legislation, and today's changes mean we will be able to bring the full weight of the law against those who use the internet as a weapon to ruin people's lives and do so quicker and more effectively."
The online safety bill is expected to be introduced to parliament over the next few months and is designed to protect users from harmful content. Under the changes brought forward by Dorries, the legislation will introduce three new online communications offences for individuals proposed by the Law Commission, an independent body that reviews laws in England and Wales.
Those offences are: sending or posting a message that conveys a threat of serious harm, sending a communication with the intent of causing psychological harm or serious emotional distress, and deliberately sending a false message with the intention of causing harm.'
There's more text at the link. My first impression is to approve of this effort.
Chainfire
(17,538 posts)
Bernardo de La Paz
(49,001 posts)
Chainfire
(17,538 posts)
However, if that type of law were passed here, I can see myself getting a visit from the Republican Gestapo for calling Trump a dumb SOB. Do you want that kind of slippery slope? All that glitters is not gold.
Bernardo de La Paz
(49,001 posts)
It will not cause psychological harm in 99.9999% of humans, most especially not the unempathic tRump.
Get real.
Chainfire
(17,538 posts)
Censorship, even for a good cause, can cause more evil than good. Of course, if it were applied in a fair and just manner, it could be a very good thing. We do not necessarily live in a fair and just world. Do not give your enemies a rope to hang you with.
OldBaldy1701E
(5,128 posts)
Bernardo de La Paz
(49,001 posts)
... don't blow up a good thing.
The perfect is the enemy of the good. I'm against censorship. But everything has limits, including freedoms. Deterring people from crying "Fire" in a crowded theatre is not censorship. This is the same kind of thing.
By not combatting threats and harmful activities you are giving them the rope to hurt people.
Just because something is difficult does NOT mean we should give up and do nothing.
Threats are out of hand and good people are being frightened into quitting hospitals and teaching positions and school boards. It has to be stopped.
Chainfire
(17,538 posts)
Bernardo de La Paz
(49,001 posts)
... examples that fail in your state.
There are many.
paleotn
(17,913 posts)
But I do get your point. Threats of physical harm I can understand. In some US states, that's already a criminal offense: communicating threats. Psychological harm and emotional distress? That's rather subjective. Acres of gray area.
3Hotdogs
(12,376 posts)
A new cure for something comes out. It is publicized but considered by the "health establishment" of that particular disease to be fraud. The scientist or company gets sued. But the cure is later found to be effective.
This type of law could hinder research and publication of findings.
Another example: January, 2020. C.D.C.: "Masks don't do no good."
..........................March, 2020: "Everybody gotta wear a mask."
Yeah, don't get excited. I know the C.D.C. change resulted from new understanding of how the virus transmits.
muriel_volestrangler
(101,316 posts)
To prevent anyone ever seeing the actual threats will mean everything posted on social media will have to go through tough filters. Stuff like "I could kill for a burger right now" will have to be blocked, pending review by a human being (and the number of those needed to allow anything like a two-way conversation to happen will be astronomical). AI cannot recognise sarcasm. I think the British government will be persuaded to rethink this.
Bernardo de La Paz
(49,001 posts)
Last edited Sat Feb 5, 2022, 11:15 AM - Edit history (1)
... a defense when threatening people.
AIs can and will be trained to recognize sarcasm. "I could kill for a {item of food, hot bath, desirable gadget, etc.}" is a recognizable pattern. AIs might flag posts, and posts might even be hidden during an appeal process, but ...
THREATS MUST STOP.
They are completely out of hand.
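The pattern argument above can be made concrete with a toy sketch. This is purely illustrative (nothing from the bill, and not how any real moderation system works; real systems use trained models, context, and human review): a filter that treats the "I could kill for a {thing}" idiom as hyperbole while provisionally flagging direct second-person threats.

```python
import re

# Toy illustration only: two hand-written patterns standing in for a
# trained classifier. A real system would be far more sophisticated.
IDIOM = re.compile(r"\bi could kill for an? \w+", re.IGNORECASE)
THREAT = re.compile(r"\bi(?:'ll| will| am going to) (?:kill|hurt|find) you\b",
                    re.IGNORECASE)

def triage(post: str) -> str:
    """Return a provisional label; anything flagged goes to human review."""
    if IDIOM.search(post):
        return "pass"   # recognizable hyperbole pattern
    if THREAT.search(post):
        return "flag"   # provisional hide, pending appeal/review
    return "pass"

print(triage("I could kill for a burger right now"))   # pass
print(triage("I'll kill you if you show up again"))    # flag
```

Note that even this crude sketch illustrates both sides of the thread: the idiom is easy to whitelist, but "I'll kill Gary" style third-person hyperbole would slip past either pattern, which is why the flag is only a provisional hide rather than enforcement.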
muriel_volestrangler
(101,316 posts)
It's hard enough for people to recognise sarcasm online from a person they don't know well; AI can't do it now, and you can't write a law assuming a major step forward in tech is about to come. "I'll kill Gary - he forgot to do the shopping on the way back from work!". This is how people talk, without meaning they'd do anything at all.
Yes, "threats must stop", but so must violence, for instance. That doesn't mean that you prosecute a property owner when violence happens on their property, for not preventing it.
I don't understand what you mean by "It will not be enforced by AI" - that seems to contradict the rest of your post.
Bernardo de La Paz
(49,001 posts)
Yes, that will require AI.
People really should think about what they say. "I'll kill Gary - he forgot to do the shopping on the way back from work!" is no longer appropriate.
And yes, you must write law in expectation of some capability. Just don't vote on it or activate it until the facts (reality) present themselves. Of course, legislators are busy people and don't have time for such exercises, but academics can write such laws in advance for discussion. You can be sure that large companies are writing such laws too, though unlikely in a form you'd approve of. So some preparation on the part of politicians, media, unions, and the public is very advisable.
As to "enforced", a flurry of AI hides against one person would not be enforcement, since there would be appeals. Any actual enforcements (suspensions, dismissals, permanent hides) would be done by humans, at least until some of the judicial apparatus goes AI at low levels. Which will happen (think parking ticket disputes). But I did not explain myself. Sorry.
Finally, the perfect is the enemy of the good. An AI that can place 95% of real threats or disinfo on initial hide status could be expected to let about 95% of satire pass too (would require context, of course, but that is very doable). Best not to let concerns that 5% of satire would be initially hidden be part of a quest for perfection that prevents the same AIs from being used to block 95% of harmful stuff.
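The 95%/95% claim above can be checked with simple arithmetic. The volumes here are hypothetical, chosen only to make the proportions visible: suppose a platform sees 1,000 genuine threats and 1,000 satirical posts in a day, and the AI hides 95% of the threats while correctly passing 95% of the satire.

```python
# Hypothetical daily volumes, purely to illustrate the 95%/95% argument.
threats, satire = 1000, 1000
threat_recall = 0.95   # fraction of real threats put on initial hide
satire_pass = 0.95     # fraction of satire correctly left alone

hidden_threats = round(threats * threat_recall)     # threats blocked
missed_threats = threats - hidden_threats           # threats that slip through
hidden_satire = round(satire * (1 - satire_pass))   # satire wrongly hidden

print(hidden_threats, missed_threats, hidden_satire)  # 950 50 50
```

On these assumed numbers, 950 threats are stopped at the cost of 50 satirical posts being provisionally hidden pending appeal, which is the tradeoff the post argues should not be sacrificed in a quest for perfection.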