Recent twts from abucci
In-reply-to » Another day, another load of bullshit from the tech industry. Posted this on LinkedIn:

I get the feeling that this is a coordinated “shock and awe” campaign aimed at forcing a certain brand of AI down our collective throats even when many of us don’t want it.

Another day, another load of bullshit from the tech industry. Posted this on LinkedIn:

Regarding the “AI Letter” calling for a pause to large-scale AI:

  1. The Future Of Life Institute, which put out this letter, is more aptly called “The Future Of Life At The Expense Of Present Life”. They are a dangerous longtermist organization, and by definition their espoused values are sociopathic. Do not take this letter at face value.
  2. This industry is literally begging for outside regulation. All the harms, real or imagined, that AI can cause are being pushed on society by many of the signatories of this letter. They are telling us that they cannot control themselves, that they cannot help but push harmful technology on society. They are asking us to rein them in, and we should.

[1] Why longtermism is the world’s most dangerous secular credo
[2] The Dangerous Ideas of “Longtermism” and “Existential Risk”

In-reply-to » I posted this on LinkedIn:

If I had more free time right now I’d write another blog post about this. For now, I just wanted to register how infuriating, tiring, and lousy this firehose of AI this/AI that is.

A lot of people in the US don’t seem to know that cars were crammed down our collective throats in much the same way, over enormous protests. Cars killed tons of people, and building roads destroyed communities on a massive scale. Huge numbers of people protested all of this and more, but cars were rammed through as something we just had to bear anyway.

Many people, including me, have raised alarm bells about this AI technology, and yet here we are having it rammed through in much the same way. It’s a pattern in the United States for sure, if not in the Western world generally. The powers that be don’t seem inclined to slow this process down or regulate it in any way. I suspect they won’t start until the harms it can cause, and is already causing, become so great they can’t be ignored anymore.

I posted this on LinkedIn:

The ACM (Association for Computing Machinery) recently circulated a survey about their authorship policies. I strongly agree with their stance that AI text generators should not be listed as authors. I strongly disagree with their stance that research articles may contain generated text if it is disclosed and meets some other reasonable criteria. I believe the inclusion of such text in research articles fundamentally reduces their quality relative to texts authored entirely by human beings. I also believe, given how AI text generators are trained, that their use is a form of plagiarism. I very much hope the ACM reverses course on that particular aspect of their policy.

In-reply-to » Mozilla Announces For "Trustworthy AI" Mozilla announced today they are investing $30 million USD to build as a new start-up focused on "building a trustworthy, independent, and open-source AI ecosystem."

Mozilla receives a significant fraction of its funding from Google. There’s no way in hell they are making “trustworthy” AI.

Glaze: Protecting Artists from Style Mimicry

Nice. An artist can run an image of their visual art through this tool. The tool produces a new version of the image that is almost identical to the human eye, but that prevents unethical, extractive AI like Stable Diffusion or Midjourney from learning the artist’s style, so that their style can’t be stolen and copied. The artist can thus freely post images online without having to worry that some asshole company will co-opt their art style.

They do warn that AI advances quickly and this particular tool will most likely not always be effective. However, I think the effort is commendable, and this tool or some future variant could put enough of a barrier in place that it is no longer cost-effective for lousy AI companies to steal from artists.
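The core idea, as I understand it, is that the tool adds a tiny, bounded change to every pixel. Here’s a toy sketch of that concept in Python with NumPy; note this uses random noise purely for illustration, whereas Glaze computes an optimized perturbation specifically targeted at style-mimicry models:

```python
import numpy as np

def cloak(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Add a small bounded perturbation to an 8-bit RGB image array.

    Random noise stands in for Glaze's optimized, style-targeted
    perturbation; the point is that each pixel shifts by at most
    `epsilon` intensity levels, far below what the eye can notice.
    """
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    perturbed = np.clip(image.astype(np.float64) + noise, 0, 255)
    return perturbed.astype(np.uint8)

# Stand-in for a real artwork: a flat gray 64x64 RGB image.
img = np.full((64, 64, 3), 128, dtype=np.uint8)
cloaked = cloak(img)

max_delta = int(np.abs(cloaked.astype(int) - img.astype(int)).max())
print("max per-pixel change:", max_delta)  # stays within epsilon
```

To a human viewer the cloaked image is indistinguishable from the original, but a model training on many such images picks up the perturbation instead of the artist’s true style.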

In-reply-to » Something is wrong with Docker

If you trusted Docker before, I wouldn’t anymore, and I’d migrate away ASAP. This kind of thing happens constantly: an actually hostile policy meets backlash, the company puts out PR for damage control, and then once the furor dies down they move ahead with the hostile policy anyway.

In-reply-to » wut

If you look at the awesome scala weekly twtxt feed, , it’s wild. “Issue 356”, the recent one I’m referring to, is repeated 13 times. Everything looks fine back to 2022-11-03T21:42:00Z, when “Issue 337” is repeated 13 times. “Issue 336” is repeated 13 times. “Issue 335” is repeated 13 times. Finally, I got bored and stopped counting.

The only conclusion is that this feed is cursed.
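For anyone unfamiliar with the format: a twtxt feed is just plain text, one twt per line, with an RFC 3339 timestamp and the text separated by a tab. Spotting duplication like the above is a one-liner with `collections.Counter`. A minimal sketch (the feed contents below are made up for illustration, not the actual awesome scala feed):

```python
from collections import Counter

# A twtxt feed: one twt per line, "RFC3339-timestamp<TAB>text".
feed = """\
2022-11-03T21:42:00Z\tIssue 336
2022-11-03T21:42:00Z\tIssue 336
2022-11-10T21:42:00Z\tIssue 337
2022-11-10T21:42:00Z\tIssue 337
"""

# Count identical twt bodies, skipping comment lines and malformed rows.
counts = Counter(
    line.split("\t", 1)[1]
    for line in feed.splitlines()
    if "\t" in line and not line.startswith("#")
)
for text, n in counts.items():
    if n > 1:
        print(f"{text!r} repeated {n} times")
```

Run against the real feed, this would tally exactly how many times each cursed issue announcement appears.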
