The University of Chicago has developed a protection tool to counter AI platforms that “read” images to mimic artists’ painting styles – Computer King Ada
Over the past year, “AI art” automatically generated by artificial-intelligence systems has become popular on all major platforms. This has also left many artists who create by hand watching helplessly as their creations are fed, without their consent and without any attribution, into the machine-learning systems behind these AI platforms, which then generate new works closely imitating their styles.
According to a Kotaku report, a research team from the University of Chicago, working together with a group of artists, has come up with a countermeasure that may allow creators to protect their work. The tool, called “Glaze”, works by automatically adding an extra layer, almost invisible to the naked eye, on top of a finished artwork. Interestingly, this extra layer is not random noise: it contains a complete second artwork whose structure closely matches the original image but whose painting style is entirely different. Although the layer is nearly imperceptible to humans, any machine-learning platform that tries to ingest the image as a reference will take it into account, become confused while analyzing the image’s details, and ultimately generate output in a style completely different from the original.
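The core idea described above is a bounded, nearly invisible perturbation layered onto the finished image. The sketch below is only a toy illustration of that idea, not Glaze’s actual algorithm (which, per the team’s paper, optimizes the perturbation in a model’s feature space against a style-transfer network); the `cloak` function and its `budget` parameter are invented here for illustration.

```python
import numpy as np

def cloak(original: np.ndarray, style_target: np.ndarray, budget: float = 4.0) -> np.ndarray:
    """Toy 'style cloak': nudge each pixel of `original` toward
    `style_target`, but never by more than `budget` intensity levels
    (out of 255), so the change stays nearly invisible to humans.
    NOT Glaze's real method; it only illustrates the idea of a small,
    bounded per-pixel perturbation that encodes a different style."""
    delta = style_target.astype(np.float64) - original.astype(np.float64)
    # Clip the perturbation to the perceptibility budget.
    delta = np.clip(delta, -budget, budget)
    return np.clip(original + delta, 0, 255).astype(np.uint8)

# Tiny synthetic example: a flat gray "artwork" vs. a random "style" image.
rng = np.random.default_rng(0)
original = np.full((8, 8, 3), 128, dtype=np.uint8)
style = rng.integers(0, 256, size=(8, 8, 3), dtype=np.uint8)

cloaked = cloak(original, style, budget=4.0)
# Every pixel moved by at most `budget` intensity levels.
max_change = np.abs(cloaked.astype(int) - original.astype(int)).max()
print(max_change)
```

A change of a few intensity levels per channel is far below what most viewers notice, yet a model averaging over many such cloaked images can be steered toward the embedded style.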
By design, Glaze specifically targets machine-learning platforms that let users automatically generate artwork in the creative style of a human artist by entering specific “prompt words”. For example, users on such platforms can ask the system to generate an image in the style of artist Ralph McQuarrie, and the system will pull a large number of Ralph McQuarrie works from the Internet and produce convincing art that imitates his style. By adding its extra layer to a creation, Glaze can disrupt that interpretive process for these platforms.
Using the work of artist Karla Ortiz as an example, the development team explained:
“Today the Stable Diffusion platform can learn to mimic Karla Ortiz’s painting style after seeing just a few of her original works (taken from Karla’s online portfolio). However, if Karla uses our tool to add a few small changes to her works before publishing them in her online portfolio, Stable Diffusion will no longer be able to imitate her style. Instead, it will interpret her work as belonging to a different style (such as Van Gogh’s). Then, when someone enters the prompt ‘artwork in the style of Karla Ortiz’ on the Stable Diffusion platform, they will see Van Gogh’s style (or some hybrid) instead. This protects Karla Ortiz’s style from being easily copied without her authorization.”
Sadly, this rather novel protection still cannot undo anything: the vast amount of artwork these platforms have already ingested into their databases remains there. But at least until those systems devise ways to defeat such countermeasures, artists around the world can use the tool to protect new creations they plan to publish online. How long the method will hold up is uncertain. Even the Glaze team admits this is not a one-and-done solution: given how quickly these AI systems evolve, tools like Glaze are bound to face further challenges.
Users who want to try Glaze can download it from the official website, where the team’s complete academic paper is also available.