Archive for the ‘AI’ Category

Chatbots and responsibility

May 28, 2023

(Updated re copyright)

This is getting interesting.

Large language models (such as GPT-3 and GPT-4) generate text based on the probability of what text should follow. They have no internal conception of truth. The probabilities which determine text generation are reflections of conformity and are based on the weights of existing usage patterns contained within their training data.
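To make “generating text based on probability” concrete, here is a minimal Python sketch of sampling the next word from a weighted distribution. The words and weights are invented for illustration only; a real model derives its probabilities from billions of learned parameters, not from a lookup table.

```python
import random

# Hypothetical next-word weights following the prompt "The court was" --
# invented numbers for illustration; a real model computes these from
# learned parameters, not a lookup table.
next_word_probs = {
    "adjourned": 0.40,
    "unprecedented": 0.25,
    "satisfied": 0.20,
    "misled": 0.15,
}

def sample_next_word(probs):
    """Pick the next word in proportion to its probability weight."""
    words = list(probs)
    weights = list(probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Repeated runs give different continuations: frequency of past usage,
# not truth, decides what gets generated.
for _ in range(5):
    print(sample_next_word(next_word_probs))
```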

The key questions which arise are:

  1. Who “owns” copyright to the generated text?
  2. Is the language model merely a tool?
  3. Is the “user” of the tool responsible for the product or does the owner of the model share responsibility for the product (the generated text)?

The product of a hammer or a screwdriver reflects the skill (or lack of skill) of its user. With a large language model, the user’s “skill” is confined to posing questions to the chatbot, and that skill has little impact on the text actually generated.

BBC

ChatGPT: US lawyer admits using AI for case research

A New York lawyer is facing a court hearing of his own after his firm used AI tool ChatGPT for legal research. A judge said the court was faced with an “unprecedented circumstance” after a filing was found to reference example legal cases that did not exist. The lawyer who used the tool told the court he was “unaware that its content could be false”. ChatGPT creates original text on request, but comes with warnings it can “produce inaccurate information”.

The original case involved a man suing an airline over an alleged personal injury. His legal team submitted a brief that cited several previous court cases in an attempt to prove, using precedent, why the case should move forward. But the airline’s lawyers later wrote to the judge to say they could not find several of the cases that were referenced in the brief.

“Six of the submitted cases appear to be bogus judicial decisions with bogus quotes and bogus internal citations,” Judge Castel wrote in an order demanding the man’s legal team explain itself.

Over the course of several filings, it emerged that the research had not been prepared by Peter LoDuca, the lawyer for the plaintiff, but by a colleague of his at the same law firm. Steven A Schwartz, who has been an attorney for more than 30 years, used ChatGPT to look for similar previous cases. In his written statement, Mr Schwartz clarified that Mr LoDuca had not been part of the research and had no knowledge of how it had been carried out.

Mr Schwartz added that he “greatly regrets” relying on the chatbot, which he said he had never used for legal research before and was “unaware that its content could be false”. He has vowed to never use AI to “supplement” his legal research in future “without absolute verification of its authenticity”.

Screenshots attached to the filing appear to show a conversation between Mr Schwartz and ChatGPT. “Is varghese a real case,” reads one message, referencing Varghese v. China Southern Airlines Co Ltd, one of the cases that no other lawyer could find. ChatGPT responds that yes, it is – prompting “S” to ask: “What is your source”. After “double checking”, ChatGPT responds again that the case is real and can be found on legal reference databases such as LexisNexis and Westlaw. It says that the other cases it has provided to Mr Schwartz are also real.

Both lawyers, who work for the firm Levidow, Levidow & Oberman, have been ordered to explain why they should not be disciplined at an 8 June hearing. Millions of people have used ChatGPT since it launched in November 2022. It can answer questions in natural, human-like language and it can also mimic other writing styles. It uses the internet as it was in 2021 as its database. There have been concerns over the potential risks of artificial intelligence (AI), including the potential spread of misinformation and bias.

The ChatGPT response regarding copyright is a little too weaselly for my liking:

ktwop: Who owns the copyright to text generated by ChatGPT?

ChatGPT: As an AI language model, I don’t have legal ownership or rights. The text generated by ChatGPT is based on a combination of licensed data, data created by human trainers, and publicly available data. The ownership of the copyright for the individual responses or outputs generated by ChatGPT would typically depend on the specific context and how the model is being used.

In most cases, the copyright for the content generated by an AI language model would likely belong to the person or organization that owns the AI system or has the legal rights to its output. For example, if a company develops and operates ChatGPT, they would generally hold the copyright to the content generated by the model.

It’s important to note that while AI models like ChatGPT can assist in generating text, the responsibility for adhering to copyright laws and ensuring proper usage of generated content lies with the user or organization utilizing the model.

My gut feeling is that the creator/owner of a large language model must carry some responsibility (legal liability) similar to product liability.

Cornell University: Product liability is a doctrine that gives plaintiffs a cause of action if they encounter a defective consumer item. This doctrine can fall under negligence, but it is generally associated with strict liability, meaning that defendants can be held liable regardless of their intent or knowledge.


Nothingness is not possible in a human universe

May 27, 2023

I could add quite a lot of words to this and I am not entirely happy with the “consensus” logic applied, but I would not strongly repudiate anything that was generated.

Blogging could be becoming obsolete; or, as I would argue, writing – including blogging – has gained a new tool.

ktwop: Discuss – Nothingness can never be real in a universe with humans.

ChatGPT: The concept of nothingness is often abstract and philosophical, and its interpretation can vary depending on the context in which it is discussed. In the context of a universe with humans, it can be argued that true “nothingness” may not exist or be experienced by humans due to our innate perception and understanding of reality.

From a philosophical standpoint, nothingness is often understood as the absence of something or the lack of existence. However, it is challenging to truly grasp or conceptualize nothingness since our perception is fundamentally rooted in the existence of things. Our sensory experiences and cognitive processes are constantly engaged with the world around us, making it difficult to imagine a state of absolute non-existence or nothingness.

In a universe with humans, even when we perceive emptiness or voids, they are still defined in relation to the existence of objects or entities. For example, an empty room is still considered empty because we can contrast it with rooms that are occupied or filled with objects. Our understanding of nothingness is thus dependent on the presence of something and our ability to differentiate between existence and non-existence.

Moreover, from a scientific perspective, the laws of physics dictate that matter and energy cannot be created or destroyed, but they can be transformed or rearranged. This principle, known as the conservation of energy, suggests that there is always something rather than absolute nothingness. Even in seemingly empty spaces, there are still particles and energy fluctuations at the quantum level.

From a psychological standpoint, humans also tend to assign meaning and significance to the concept of nothingness. We often associate it with emptiness, absence, or non-existence, which can be unsettling or challenging for our minds to comprehend. The existential nature of human consciousness leads us to ponder the nature of reality and our place within it, but it is difficult for us to conceive a reality devoid of existence or experience.

In conclusion, while the concept of nothingness can be contemplated and discussed philosophically, it is challenging to argue that true nothingness can exist in a universe with humans. Our perception, understanding, and experience of reality are inherently rooted in the existence of things, and even seemingly empty spaces are not devoid of particles and energy. The human mind and our cognitive processes make it difficult to truly fathom or experience nothingness in its absolute sense.

The key to the proper use of chatbots as writing tools would be if a human

  1. could legitimately claim ownership of what was generated, and
  2. took ownership of what was generated.

I can claim ownership of the question I posed to ChatGPT, but I am not sure I can claim ownership of what is generated – even if it is generated under my username.


Testing ChatGPT /2: Large language models conflate conformity with truth

May 3, 2023
  1. Large language models (such as GPT-3) generate text based on probability.
  2. They can have no internal concept of truth.
  3. Probabilities are reflections of conformity and are based on the weights of existing usage patterns contained within their training data.
  4. Training adds further weight, in the form of human feedback, on the positivity or negativity of what has been generated.
  5. Generated text scores highly for conformity with the model’s own database and with the training (feedback) received.
  6. High conformance is conflated with truth (a toy sketch of this follows the list below).
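A toy Python sketch of points 3 to 6 (my own illustrative construction, not any actual training algorithm): the score that drives generation combines corpus frequency (conformity) and trainer feedback, and truth appears nowhere in it.

```python
# Toy illustration, not a real training algorithm: a claim's generation
# score combines corpus frequency (conformity) and hypothetical trainer
# feedback. Nothing in the score refers to whether the claim is true.

corpus_frequency = {              # occurrences in the "database"
    "The earth is round": 900,
    "The earth is flat": 100,
}
feedback_weight = {               # hypothetical trainer feedback (+/-)
    "The earth is round": +1.0,
    "The earth is flat": -0.5,
}

def generation_score(claim):
    """Conformity (relative frequency) adjusted by feedback."""
    total = sum(corpus_frequency.values())
    conformity = corpus_frequency[claim] / total
    return conformity * (1.0 + feedback_weight[claim])

for claim in corpus_frequency:
    print(f"{claim!r}: score = {generation_score(claim):.2f}")
```

The popular claim ends up with a far higher score than the unpopular one, and would do so even if the frequencies were reversed and the popular claim were false.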

Of course, this is a common feature in human interactions as well. Many people holding a particular opinion gives that opinion a higher perceived truth value, even if that opinion is utterly wrong or false. The deification of “the many” has warped our notions of what truth is.

But the fuzziness around truth could be a particular issue with large language models and for AI generally. Truth is not an inherent, internal concept of such models. Truth becomes relative (to the views of the trainers), and conformity is conflated with truth.


ktwop: Do large language models generate probabilistic text?
ChatGPT: That’s correct! Large language models generate text using probability distributions. These models are trained on massive amounts of text data and learn to predict the likelihood of certain words or phrases based on the context of the surrounding text.
When generating text, the model samples from these probability distributions to determine which word or phrase to use next. Because the model is probabilistic, it can generate different outputs for the same input, making the generated text more diverse and interesting.
However, it’s important to note that while large language models like GPT-3 are very good at generating coherent text, they are not perfect and can still produce errors or nonsensical output. It’s also important to be aware of ethical concerns surrounding the use of such models for potentially harmful purposes such as generating fake news or propaganda.
ktwop: How can the output from a language model be given a truth value?
ChatGPT: The output from a language model can be given a truth value by comparing it to a reference or ground truth text. The reference text can be a sentence or a paragraph that is known to be true or false, depending on the task at hand. For example, if the task is to generate a factually correct sentence, the truth value of the output can be determined by comparing it to a reliable source of information, such as a verified news article or a trustworthy database. If the output matches the reference text, it can be considered true, and if it does not, it can be considered false.
Similarly, for tasks such as sentiment analysis or topic classification, the truth value can be determined by comparing the output to a set of labels that are already defined as true or false. For example, if the task is to classify a text as positive or negative, the truth value of the output can be determined by comparing it to a labeled dataset where each text is already classified as positive or negative.
In summary, the truth value of the output from a language model can be determined by comparing it to a reference text or a set of labeled data that is already known to be true or false.
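The check ChatGPT describes is entirely external to the model. A minimal sketch of what such a check might look like, with a hypothetical reference set standing in for a verified database such as LexisNexis or Westlaw:

```python
# Sketch of the external truth check ChatGPT describes above: compare a
# generated citation against a trusted reference. The reference set here
# is a hypothetical stand-in for a verified legal database.
verified_cases = {
    "Marbury v. Madison",
    "Brown v. Board of Education",
}

def truth_value(citation):
    """True only if the citation exists in the trusted reference."""
    return citation in verified_cases

generated = [
    "Marbury v. Madison",
    "Varghese v. China Southern Airlines Co Ltd",  # the bogus case from the BBC story
]
for case in generated:
    verdict = "verified" if truth_value(case) else "NOT FOUND - unverified"
    print(f"{case}: {verdict}")
```

The check is only as good as the reference it is run against; the model itself applies no such test, which is exactly the problem in the New York case above.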