Honesty and Artificial Intelligence

Suppose we create an artificial intelligence capable of producing a logical, functional proof for some claim about the physical world. Such an AI would first need to have modeled enough language to understand our messy world in recognizable terms. It would further need to know enough about that world to propose statements about it, and to grasp enough logic to derive new statements from given statements and rules.

Assume that the AI mechanically produces natural language proofs, and that its skills and reasoning do not translate to other domains. That is, it has not sufficiently generalized its understanding for us to consider it sentient. Machine learning tools could potentially produce such an “AI,” and given the limitations placed on it, we could decide that it lacks higher intelligence and is therefore a tool of its user. The user, after all, must provide context for any proof they produce with this tool.

Now consider the possibility that we ask the AI to prove the validity of the statement “rational agents have an ethical obligation to not lie.” We give it certain premises, drawn for instance from Kantian ethics: “rules, such as the statement in question, should apply universally.”[1] A user equipped with this hypothetical tool could plausibly provide answers to significant philosophical questions. What if the user decided that one such proof attained sufficient quality to warrant publication, not necessarily in an academic journal, but perhaps on a blog? Suppose the user publishes the proof without explaining its source, in other words, without giving the AI credit for discovering it. Would its publication be honest? Would it be deceitful? What if the proof were trivial? What if it involved substantial legwork?
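To make the exercise concrete, here is a minimal sketch in Lean of the bare logical skeleton such a tool might be asked to formalize. It assumes, purely for illustration, that we encode “the maxim of lying is permissible” and “the maxim of lying can be universalized” as propositions and the Kantian premise as an implication between them; the names and premises are hypothetical, and the real philosophical work lies in defending the premises, not in the derivation itself.

```lean
-- Hypothetical, highly simplified encoding (illustrative names, not any real system):
-- Permissible      : "the maxim of lying is permissible"
-- Universalizable  : "the maxim of lying can be universalized"
example (Permissible Universalizable : Prop)
    (kant : Permissible → Universalizable)   -- premise [1]: a permissible maxim must apply universally
    (self_defeat : ¬ Universalizable) :      -- premise: a universal practice of lying defeats itself
    ¬ Permissible :=
  fun h => self_defeat (kant h)              -- the "proof" itself is a single step of modus tollens
```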

What if the proof were significant enough to warrant publication in an academic journal, and the user again opted to take full credit for its discovery? The proof was, after all, discovered by clever application of existing tools. A particularly sentimental carpenter might credit their hammer for its contribution to the construction of a house, but such a remark would typically be perceived as humorous rather than heartfelt. Moreover, we would not consider the carpenter dishonest if they neglected to acknowledge the role the hammer played in the house’s construction.

What if the user, in a slightly different context, submits the proof as part of their PhD work, and succeeds in their defense? Now the user has earned significant merit for work they may or may not truly have accomplished.

What role does academic honesty play in the rapidly approaching world of general AI, where one could clamp the parameters of such an AI to yield a mere tool, capable of reason but incapable of demonstrating sentience? Does our definition of sentience or consciousness allow an AI of such utility to be considered mechanical and unaware? Could such limitations even be placed on an AI?

[1] http://www.csus.edu/indiv/g/gaskilld/ethics/kantian ethics.htm
