Problems with Elon Musk’s Twitter fundraising plan


For example, Melanie Dawes, CEO of Ofcom, which regulates social media in the UK, has said that social media platforms will need to explain how their code works. And the European Union's Digital Services Act, agreed on April 23, will likewise force platforms to offer more transparency. In the US, Democratic senators have introduced proposals for an Algorithmic Accountability Act, which aims to bring new transparency and oversight to the algorithms that govern our timelines, news feeds, and more.

Allowing Twitter's algorithm to be seen by others and adapted by competitors means, in theory, that someone could copy Twitter's source code and release a rebranded version. Much of the internet already runs on open-source software; the best-known example may be OpenSSL, a security toolkit used by large parts of the web, which suffered a major security breach in 2014.

There are even examples of open-source social networks already. Mastodon, a microblogging platform built amid concerns about Twitter's dominant position, lets anyone inspect its code, which is published on the GitHub software repository.

But seeing the code behind an algorithm doesn’t necessarily tell you how it works, and it certainly doesn’t give the average person much insight into the business structures and processes that went into its creation.

“It’s like trying to understand ancient creatures with just genetic material,” says Jonathan Gray, a senior lecturer in critical infrastructure studies at King’s College London. “It tells us more than nothing, but it would be an exaggeration to say we know how they lived.”

Also, there is no single algorithm that controls Twitter. "Some will dictate what people see on their timelines, in terms of trends, content, or suggested follows," says Catherine Flick, a researcher in computing and social responsibility at De Montfort University in England. The algorithms people will be primarily interested in are those that control what content appears on users' timelines, but even those would not be very revealing without the training data behind them.
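The point about training data can be illustrated with a toy sketch: even when a ranking function's source code is fully public, its behavior is determined by learned weights that the code itself does not contain. Everything below (function names, features, weight values) is hypothetical and is not Twitter's actual code.

```python
# Hypothetical timeline ranker: the code is fully visible, yet the ordering
# it produces depends entirely on trained weights supplied from outside.

def rank_timeline(posts, weights):
    """Sort posts by a weighted sum of their engagement features."""
    def score(post):
        return sum(weights[name] * value
                   for name, value in post["features"].items())
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "features": {"likes": 0.9, "recency": 0.2}},
    {"id": 2, "features": {"likes": 0.1, "recency": 1.0}},
]

# Two different trained weight sets yield opposite orderings from the same code.
engagement_weights = {"likes": 1.0, "recency": 0.1}
freshness_weights = {"likes": 0.1, "recency": 1.0}

print([p["id"] for p in rank_timeline(posts, engagement_weights)])  # [1, 2]
print([p["id"] for p in rank_timeline(posts, freshness_weights)])   # [2, 1]
```

Reading the function tells you its general shape, but not which posts it will actually promote; that depends on the weights, which in a real system emerge from training data the public would not see.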

"When people talk about algorithmic accountability these days, we recognize that the algorithms themselves aren't what we want to see—what we really want is information about how they were developed," says Jennifer Cobbe, a postdoctoral research associate at the University of Cambridge. This is largely due to concerns that AI algorithms can end up perpetuating the human biases present in the data used to train them. Who develops an algorithm, and what data they use, can make a significant difference in the results it spits out.

For Cobbe, the risks outweigh the potential benefits. The code itself gives no insight into how an algorithm was trained or tested, what factors or considerations went into it, or what kinds of things were prioritized in the process, so open-sourcing may not make a meaningful difference to transparency at Twitter. Meanwhile, it could introduce some significant security risks.

Companies often carry out impact assessments that probe and test their systems to highlight weaknesses and flaws. When these are discovered they get fixed, but details are often redacted to avoid creating security risks. Open-sourcing Twitter's algorithms would make the website's entire codebase accessible to everyone, potentially allowing bad actors to pore over the software and find vulnerabilities to exploit.

“I don’t believe for a second that Elon Musk is looking at making the entire infrastructure and security side of Twitter open-source,” says Eerke Boiten, professor of cybersecurity at De Montfort University.
