
Yes please!

There's no point in boasting green credentials when we allow manufacturers to force users into buying a new phone or tablet every 1-2 years.

Planned obsolescence is the negation of sustainability. Devices that are hard to repair should never again touch European soil.

Tossing away a device made of metals, glass, silicon and rare earths, and buying a new one just because the producer won't provide us with spare parts (or even allow us to remove the back cover), is immoral on so many levels. And it doesn't benefit anyone except the producer's profits (so much for the myth of capitalist profits aligning with the majority's interests and priorities).

Make devices that can easily be repaired, provide repair instructions and spare parts, or don't sell your shit on my continent at all.

A single `solana program close` command is sufficient to take down a whole platform with no chance of recovery - and apparently no warning is given to users before they issue the command.

A single command permanently wiped OptiFi's on-chain program, and $661K with it.

If cryptocurrencies really aim to replace real currencies, they'd better make sure that they abide by proper regulation that prevents financial records and accounts from vanishing like this. If you run an old-fashioned bank, you can't just wipe all of your transaction records with a single command - with no backups in place and no way to recover from failure.

I got tired of putting my trust in search engines that run on somebody else's machine.

DuckDuckGo blocks all trackers, but it silently whitelists Microsoft's (because of the commercial agreement they have with Bing) - except that they failed to mention it until they were caught with their hands in the cookie jar.

Startpage is Dutch, it's notoriously sensitive about privacy and it's been around for longer than Google, but it's now been acquired by an ads company and I don't see things going well for it.

Brave claims to be the ultimate solution for privacy - except when they run a crypto miner in your browser so they can make a bit of extra money on the side.

So I've decided to take this matter into my own hands too and run my own search engine. You can access it at . It runs SearXNG in a Docker container on one of my servers at home. It's a bit slower than the major search engines, but not by much, and it's a small price I'm ready to pay for freedom.

"The lawsuit, which had been seeking to be certified as a class action representing Facebook users, had asserted the privacy breach proved Facebook is a data broker and surveillance firm, as well as a social network".

The guys at have done a good job explaining the ongoing feud between and the on data sharing in this new video.

The US has repeatedly failed to comply with a framework that should ensure that private data about EU citizens collected by American businesses doesn't get shared with federal agencies without a valid reason (such as international criminal investigations).

In the absence of a clear legal framework for data sharing between the EU and the US, Facebook is left with two choices: either they leave the EU, or they *federate*.

In this specific case, federation doesn't really mean what we mean when we talk about it on the Fediverse. It means, rather, a business spawning an operative branch in another territory, collecting and retaining data only in that territory (and therefore subject to that territory's jurisdiction), and sharing with its parent company only the data that is strictly required for operational purposes.

But hey, if this definition of "federation" could be expanded to "an environment where individual platforms (not necessarily within the same organization) can communicate and share data with one another through open protocols, all through transparent user consent and granular access control", I wouldn't mind it at all!

Interestingly (and unsurprisingly), Facebook doesn't seem to be much interested in this second possibility.

In other words, they'd rather lose 28% of their global revenue by leaving the EU than open up their protocols and federate (or at least keep the data close to where it's been produced). They're very well aware that the second option would create a precedent that may push even more countries to adopt a similar approach, and Facebook would quickly lose its monopoly over social platforms.

That's exactly why we need to keep pushing for the right regulation. I wouldn't mind it if Zuckerberg left my continent for good - my life wouldn't change at all, but the lives of millions (and of many businesses among them) would. So I'd rather keep them here and force them to comply. Compliance means a better service for everybody around the globe, not only for Europeans.

We have enough data to prove that has been collecting and reselling each piece of information they could about almost every single person on earth without ever asking for their consent.

But we've got the same problem again: an American company can't be brought before a federal judge for violating user privacy because, unlike Europe with its GDPR, the US has no federal law that protects consumers' privacy - it's up to individual States to implement one. The US needs a nationwide privacy law and it needs it NOW. The lack of such a legal framework harms the privacy of people even outside of its borders.

By simply taking over the company that makes the most popular product in the industry, Amazon can buy its way to the top and grow its monopoly without actually having to out-innovate and out-compete its rivals.

Amazon represents everything that is wrong with today's rotten, degenerate and decadent capitalism, where healthy competition and wide choice for the customer have been replaced by a few monopolies that have grown even more powerful than governments and regulators. And it's good that even traditionally liberal news outlets like The Atlantic are acknowledging the problem.

Such a bummer that Dugin Sr. (a hard-right nationalist, neo-fascist and anti-Western "philosopher" whose solution for Ukraine is "kill them, kill them, kill them") wasn't in that car.

Btw, I believe that AI has hit a philosophical wall that technology alone cannot move.

When I started working on AI in the mid-2000s, it was still all about expert systems, decision trees that modeled first-order logic, ontologies, and graph exploration to come up with the best strategy to solve a problem.

That generation of AI could already solve impressive problems, but it was limited by the amount of knowledge that humans could put into it to describe all the possible combinations of a complex problem, or all the possible logically valid propositions, or all the grammar rules of a language.

It was a purely reasoning-based AI. Deterministic, reliable, but its utility was constrained by the amount of logical and algorithmic rules that humans could put into it.

Then computing became cheaper and more scalable, data became cheaper and more abundant for big corporations as well, and neural networks, largely forgotten for nearly 25 years, got their moment. We suddenly had statistical systems that could figure out patterns and rules from labeled data, without a human explicitly encoding them into a graph. And we really thought we had solved the problem of AI. But then you get systems that can recognize a human and a stop sign individually, yet don't know what to do when the two appear together - because they were never trained to deal with such an unusual combination, or even told what a stop sign really means.

Deep learning threw away decades of reasoning-based expert systems to focus on empirical models trained through statistical pattern matching, but in doing so it created parrots that can talk about anything without understanding what they're talking about.

It's again the long-lived clash between rationalism and empiricism. In spite of the technology, these problems have been around at least since the times of Plato. Do we learn through deduction (we learn the basic building blocks of reality, and then we learn how to logically connect them into increasingly complex structures), or do we learn through experience (by observing and replicating things again and again, measuring the feedback, and gradually converging towards a local optimum that statistically minimizes the odds of error)?

Well, it turns out that we may need both - but we can't make such a big theoretical leap in understanding how machines (and even humans) learn while the whole field is in the hands of a handful of companies mostly interested in small iterative improvements over their existing imperfect models, with little to no incentive to take the big risks required to really push the industry forward.


The biggest teams of researchers in AI are no longer to be found in the academy, where peer review used to be coin of the realm, but in corporations. And corporations, unlike universities, have no incentive to play fair. Rather than submitting their splashy new papers to academic scrutiny, they have taken to publication by press release, seducing journalists and sidestepping the peer review process. We know only what the companies want us to know.

In the software industry, there's a word for this kind of strategy: demoware, software designed to look good for a demo, but not necessarily good enough for the real world. Often, demoware becomes vaporware, announced for shock and awe in order to discourage competitors, but never released at all.

The central challenge going forward is to unify the formulation of learning and reasoning. There can be no deep learning without conceptual reasoning. You can't deal with a person carrying a stop sign if you don't really understand what a stop sign even is.

Imagine if some extraterrestrial studied all human interaction only by looking down at shadows on the ground, noticing, to its credit, that some shadows are bigger than others, and that all shadows disappear at night, and maybe even noticing that the shadows regularly grew and shrank at certain periodic intervals - without ever looking up to see the sun or recognizing the three-dimensional world above.

It's time for artificial intelligence researchers to look up. We can't "solve AI" with PR alone.

I usually never miss a chance to bash Google, but they are right in this case.

Aggregating and presenting links doesn't make you a publisher. Your algorithms simply fetch and present content; they aren't the authors of that content. So, in the case of defamatory content published on Google News, Google is about as liable for defamation as my RSS aggregator or your browser is.

Come to think of it, if a defamatory article was printed in a newspaper back in the times when people actually read physical newspapers, nobody would have thought of suing the newsstand that sold you the paper instead of the author of the article itself.

If you wanted the online service X to be ads-free and not collect and resell your personal data, how much should you pay to make up for their lost revenue?

It's a question that has been roaming around in my head for a while, and today I decided to collect some rough data and come up with a price tag.

Facebook's ads revenue in 2021 was almost $115B. The platform has about 1.9B daily active users, and 2.9B monthly active users.

($115e9 / 2.9e9) / 12 = $3.3 per month

Do we want to assume instead that most of the ads revenue comes from the daily active users, and that the fraction coming from those who log in every other week or so is negligible? No problem:

($115e9 / 1.9e9) / 12 = $5 per month
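The back-of-the-envelope math above is easy to reproduce; a few lines of Python, using the 2021 figures quoted earlier (approximate, rounded values):

```python
ads_revenue_usd = 115e9        # Facebook's ~2021 ads revenue
monthly_active_users = 2.9e9   # MAU
daily_active_users = 1.9e9     # DAU

# Monthly price per user needed to fully replace the ads revenue
price_per_mau = ads_revenue_usd / monthly_active_users / 12
price_per_dau = ads_revenue_usd / daily_active_users / 12

print(f"per monthly active user: ${price_per_mau:.2f}/month")  # ≈ $3.30
print(f"per daily active user:   ${price_per_dau:.2f}/month")  # ≈ $5.04
```

Either way, the ad-free break-even price lands in the $3-5/month range per user.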

Would you be happy to pay up to $5 a month to use a social network that connects you to the whole world, but doesn't hoard your personal data like a junkie, doesn't resell it to controversial actors like Cambridge Analytica, and doesn't profit from targeted ads that push people towards extremism?

Many prefer to see a few ads and don't mind what is done with their data, as long as the service is free. Many others, instead, would surely prefer to pay $5 a month, if that's the price of buying Facebook's respect for their data.

The main problem is: why aren't we even given that choice?

Everybody hates ads and wants to block them - but only as long as it's someone else, not you, making a profit from them.

I hate my industry and I hate the way it is funded, with no exceptions. Many decades down the line, the answer to "how do we increase revenue?" is still "just shovel more ads down the customers' throats", or "just collect more user data so you can shovel more targeted ads down the users' throats".

Companies like Apple make some extra revenue from their sales of overpriced gadgets, and therefore they could afford to build a narrative like "we're not Google and Facebook - we care about your privacy, and about an experience with technology that is not overwhelmed by ads".

But as their sales of iPhones and iPads decline amid supply shortages and market saturation, even Apple has to fall back on aggressive advertisement policies, and pretend that they never said any of the things they said about ads over the past 5 years.

How did we let the IT industry become so subjugated to the ads industry that nobody seems to even consider other business models?

We finally have a way to tinker with John Deere tractors, even though the company has always been one of the strongest opponents of the right to repair.

It turns out that with a simple piece of hardware that bypasses the DRM checks you can get a beloved Linux root terminal. The nice thing is that the company has no way of fixing it, other than releasing new tractors with full disk encryption.

Of course, the first thing that they did was to run Doom on the tractor.

RT @AlzogliOcchi: Having been ill for a long time and knowing the end was near, Piero Angela wanted to leave a farewell message, which was posted after his death on the "SuperQuark" Instagram profile. #13agosto #PieroAngela


Quayside, in Toronto, was supposed to be the leading proof of concept for smart cities in the Western world.

Google's parent company planned to invest almost $1B in autonomous garbage collection, self-driving taxis, and an extensive data collection layer - from bench occupancy, to pedestrian crossing monitoring, to live public transport usage data.

The problem is that Alphabet failed to address the locals' concerns about privacy and data governance - and in many cases it even actively dismissed or derided them. The local government even mentioned several episodes of arrogance from Alphabet's representatives. So, two and a half years after its start, the project has been officially terminated.

This is a good example of how citizens can push back on surveillance capitalism. Smart cities have plenty of potential to improve lives. Collecting more accurate live data leads to a better understanding of the problems, and therefore to better governance. But the collected data is extremely valuable, and it shouldn't solely lie in the hands of a private corporation. Especially if that corporation dismisses valid concerns about user privacy and data gatekeeping as technophobia.

These technologies should be controlled by elected officials, because they are the ones accountable (through the democratic process) for their correct usage. Representatives who misuse the data, or sell private citizens' data to third parties, are likely to be voted out. You can't say the same about Alphabet. Nobody voted for them, nobody can vote them out if they misuse the data, they aren't accountable to anyone other than their shareholders, and therefore they are not fit to handle something as precious as the data flows of our cities.

And there's also a disturbing lesson to be learned here: companies like Google would rather kill billion-dollar projects and leave our cities unimproved than lose control over the data, or even just start a conversation about data usage and accountability. Their interest is not to make the world a better place: it's to maximize profit. Improving things sometimes comes as a side effect of their profit-seeking strategies - but sometimes it doesn't.

This is why we NEED more open mobile tech. Any step taken by Google and Apple in the opposite direction is a crime against humanity that hinders talent and innovation, and benefits nobody but their balance sheets.

I'm proud of this boy from Zambia showing a cheap phone running Termux+neofetch.

He learned to code on that Android phone, in a corner of the world where only the very wealthy can afford a personal computer.

He now runs a Twitter account where he regularly posts about cybersecurity, with a particular focus on malware analysis and reverse engineering.

Open-source software like Termux, which enables this guy to run a Linux-like system on a cheap phone, is under constant threat. With every new release, the Android environment becomes more and more closed, requiring an increasing number of steps to install software that provides the degree of freedom that Termux does. Termux itself has recently been forced to target a lower version of the Android SDK because of new limitations on executing files from an app's data folder introduced in Android 12. The app gets pulled from the Play Store for a variety of reasons every now and then, and it may not work at all on future versions of Android.

Not to mention the lack of funding this software gets (mostly voluntary one-shot donations), and the frequent episodes of burnout among FLOSS developers who are overwhelmed by the work required to build and maintain software without getting any reward.

It should be Google's responsibility to make sure that an African kid gets the same opportunities to become an engineer or a scientist as a Western one. Instead, in the best-case scenario, they ship them a container of Chromebooks loaded with proprietary and closed software, no way of tinkering with it, and act like they have made the world a better place.

So whenever Google or Apple decides to force you to buy a new device through planned obsolescence, or restricts your ability to tinker with devices you have purchased, remember that their evil isn't only targeting people like me - Western white guys with enough disposable income to afford a personal computer to do all the tinkering on.

The main victims of their strategies are people like this guy, who would never have learned to code, and may never have gotten a chance to land a good job, if it weren't for a cheap Android phone that could run a Linux-like environment.

To the hardcore capitalists out there: this is the kind of capitalism that I'd like to see more.

Our capitalism is sick because it degenerated into a few isolated monopolies and oligopolies with little to no incentive to further innovate.

Innovation in a saturated market where all the quick wins have been exploited is either risky or expensive, and if you have no mechanisms of natural selection (i.e. competition from smaller players) then the only interest of those monopolies will be to defend their revenue.

Why on earth would I care to bring broadband internet to a small rural community, where the ROI is likely to be negative, when I can simply squeeze more revenue from the low-hanging fruit I already have by providing more services to a mostly urban, wealthier and already well-serviced market?

And here is where we have a problem. We have defined some things (like access to drinkable water, electricity or broadband internet) as inalienable human rights, at least in the West. But then we have delegated the implementation of those tasks to private companies whose interest is to maximize their profit margins, not to provide the service to everyone.

If we define something as a basic need, then we need to make sure that that need is fulfilled even when its implementation is not profitable. And if large monopolies weigh their ROI more than their ability to provide more people with a better service (another capitalist myth that is broken more often than it is confirmed), then the government should do everything in its power to reward the small players and make sure that they get large enough to challenge the large monopolies - at least on a local scale.


A quite disturbing note: the JS code that Facebook injects into every webpage through their in-app browser is basically a piece of spartan spyware.

You can think of it as a Greasemonkey user script that does the following:

- It gets injected into every DOM
- It attaches a callback to every <button>, <a> and <form> element on the page, whose purpose is to send a request to the Instagram GraphQL endpoints with an encrypted payload on every click/submit event
- It attaches tracking parameters to every opened URL
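To make the last point concrete, here's a minimal sketch in Python of what "attaching tracking parameters to every opened URL" amounts to. This is my own reconstruction for illustration, not Facebook's actual code; `fbclid` is the well-known Facebook click-identifier parameter, but the value and the exact parameter set used by the real script are placeholders here:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit, urlunsplit

def add_tracking_params(url: str, params: dict) -> str:
    """Append tracking parameters to a URL, preserving any existing query args."""
    scheme, netloc, path, query, fragment = urlsplit(url)
    merged = dict(parse_qsl(query))  # keep the original query string
    merged.update(params)            # then add the tracking parameters
    return urlunsplit((scheme, netloc, path, urlencode(merged), fragment))

# Decorate an outbound link with a (made-up) click identifier:
print(add_tracking_params("https://example.com/article?id=42",
                          {"fbclid": "abc123"}))
# → https://example.com/article?id=42&fbclid=abc123
```

Every link you follow from the in-app browser then tells the destination site (and anyone it shares logs with) that the visit came from a specific Facebook click.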

Why don't we just call these things by their appropriate names? How did we get to a point where a company can inject spyware into every website through an app used by billions, and get away with it?

If you still want to use the Facebook and Instagram apps, at least do yourself a favour and use PiHole as a DNS sinkhole, so domains like and all the faecal matter they contain get flushed down the right hole.

Using these apps without blocking the trackers that they inject is like shagging the whole world without wearing a condom.


I've followed this guy's story with interest since he started about a year ago, and I'm impressed by how much progress he has made. This is a good example of "own your own tech" taken to the next level.

When the pandemic started, Jared Mauch, just like everyone else, needed better broadband internet in order to move his activities online.

There was one problem though: Mauch lives in a rural community in Michigan with no broadband internet. After a lot of insistence, Comcast proposed that Mauch pay them $50k to cover the cost of a cable laid from their nearest station to his house, while AT&T could only guarantee a 1.5 Mbps connection.

So he took things into his own hands: he negotiated contracts with companies providing optical fiber and wiring at scale, laid the cables himself, and started his own ISP.

He's now selling 100 Mbps connections to his rural neighbours for $55 a month, and he's got hundreds of households already connected. Hundreds of people who, without him, would never have had access to broadband connections.

Of course, not all rural and under-serviced areas are lucky enough to have a resourceful engineer who takes the initiative to start his own ISP after the major connectivity gatekeepers dismissed the project as too expensive. But his success story should inspire other people to do the same. Starting an ISP, after all, is mostly a matter of connecting network devices that in most cases already exist, and there's no reason why the industry should be a walled garden inhabited by only a couple of players.


A platform about automation, open-source, software development, data science, science and tech.