How to Use ChatGPT’s New Image Features

by Norman Scott

OpenAI’s ChatGPT has made significant strides in its capabilities, but it still has clear limits. It struggled to identify the artists and locations behind random photos of murals, yet it readily recognized images of well-known San Francisco landmarks such as Dolores Park and the Salesforce Tower. The feature can feel somewhat gimmicky, but it could make for an enjoyable companion for anyone exploring a new city, country, or neighborhood.

One crucial safeguard OpenAI has built into the new feature is a restriction on answering questions that involve identifying humans. Because user privacy and safety are paramount, ChatGPT declines to recognize real people in images, even famous ones. It did not refuse every question related to pornography, but it hesitated to describe adult performers in any specific way, limiting itself to explaining their tattoos.

It is worth noting, however, that in one conversation I had with an early version of ChatGPT, it appeared to slip around OpenAI’s precautionary measures. The chatbot initially refused to identify a meme featuring Bill Hader, but it then guessed that an image of Brendan Fraser in “George of the Jungle” was Brian Krause from “Charmed.” When pressed on its certainty, it eventually corrected itself and correctly identified Brendan Fraser.

In the same conversation, ChatGPT struggled to describe an image from “RuPaul’s Drag Race.” I shared a screenshot of Kylie Sonique Love, a contestant on the show, and ChatGPT misidentified her as Brooke Lynn Hytes, another contestant. It then proceeded to make multiple incorrect guesses when questioned further, including Laganja Estranja, India Ferrah, Blair St. Clair, and Alexis Mateo.

When informed of its string of mistakes, ChatGPT apologized for the incorrect identifications. Yet when a photo of Jared Kushner was uploaded, the chatbot declined to identify him at all.

These limitations are necessary for privacy and safety, because the implications of removing such guardrails are unsettling. If ChatGPT were jailbroken, or if a comparable model were released as open source without similar protections, any picture of you posted online could be linked to your identity in just a few clicks. Someone could photograph you in public without consent and instantly find your LinkedIn profile. Without stringent privacy measures built into these image features, women and members of marginalized groups could become targets of stalkers, harassers, and other abusers wielding chatbots.

It is imperative that OpenAI and other developers continue to prioritize user privacy and safety, ensuring that these powerful language models are used responsibly and ethically. While the advancements in ChatGPT’s image recognition capabilities are exciting, they must be accompanied by robust protections to prevent misuse and harm.
