Here’s What It Means For The Future

If an AI could truly invent a language of its own, that would raise unsettling questions about the future. After all, nobody wants to let loose a self-replicating, language-encrypting AI that could go rogue and start shutting down critical infrastructure (such as the internet). The good news is that researchers don't seem to believe that's the real threat posed by the experimental and largely inaccessible DALL-E 2 (though an unofficial, publicly available imitation called DALL-E Mini already exists).

Snowswell noted in his report that prompting the AI to produce images with captions in them yielded strange gibberish phrases, and that feeding those phrases back in as prompts reliably generated images of very specific subjects. There are several possible explanations. Snowswell suggested the effect could come from training data spanning multiple languages shaping how the model links character strings to images, or it could stem from how the model breaks words into tokens, with individual tokens carrying meanings of their own. A rough illustration of that second idea appears below.
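The tokenization hypothesis is easy to sketch. The example below is purely illustrative: DALL-E 2's own text encoder (a CLIP-style BPE tokenizer) is not exposed here, so OpenAI's tiktoken package stands in only to show how a nonsense phrase decomposes into subword tokens that a model treats as familiar units.

```python
import tiktoken

# Hypothetical illustration: tiktoken's vocabulary is NOT the one DALL-E 2 uses;
# it only demonstrates how a BPE tokenizer splits an unfamiliar string into
# subword pieces that each map to a token ID the model has "seen" before.
encoding = tiktoken.get_encoding("cl100k_base")

# One of the gibberish phrases reported to produce bird images in the experiment.
phrase = "Apoploe vesrreaitais"

token_ids = encoding.encode(phrase)
pieces = [encoding.decode([tid]) for tid in token_ids]

print(token_ids)  # the integer IDs a model would receive
print(pieces)     # the subword fragments those IDs correspond to (exact splits vary by vocabulary)
```

If those fragments happen to overlap with tokens the model learned from, say, scientific Latin names in its training data, a meaningless-looking phrase could still steer the model toward a very specific kind of image.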

Snowswell went on to say that the real concern isn't whether DALL-E 2 itself is dangerous, but that researchers have only limited means of blocking certain types of content. If users could sidestep banned words by using phrases from this poorly understood "secret language" to generate offensive content, that alone could be what keeps DALL-E 2 from reaching the public in its purest form, at least for now.


