Honesty and Artificial Intelligence
Suppose we create an artificial intelligence capable of producing a logical, functional proof of some claim about the physical world. Such an AI would first need to have modeled enough language to understand our messy world in recognizable terms. It would further need to know enough to propose statements about our world, and to grasp enough logic to derive new statements from given statements and rules.
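To make "deriving new statements from given statements and rules" concrete, here is a minimal sketch of naive forward chaining over a few propositional facts. The specific facts, rules, and the forward_chain helper are illustrative assumptions for this note, not a description of the hypothetical AI above.

    # Illustrative sketch: derive new statements from given statements and rules
    # via naive forward chaining. The facts and rules below are invented examples.

    def forward_chain(facts, rules):
        """Apply rules of the form (premises, conclusion) until no new
        statements can be derived."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in derived and all(p in derived for p in premises):
                    derived.add(conclusion)
                    changed = True
        return derived

    # Hypothetical given statements and a single rule combining them.
    facts = {"water boils at 100 C at sea level", "the kettle is at sea level"}
    rules = [
        (("water boils at 100 C at sea level", "the kettle is at sea level"),
         "the kettle's water boils at 100 C"),
    ]

    print(forward_chain(facts, rules))

Even this toy loop illustrates the gap the essay gestures at: the machine only recombines statements it was handed, in forms it was handed, which is part of why the next paragraph treats such a system as a tool rather than a mind.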
Assume that the AI mechanically produces natural-language proofs, and that its skills and reasoning do not translate to other domains. That is, it has not generalized its understanding sufficiently for us to consider it sentient. Machine learning tools could plausibly produce such an “AI,” and given the limitations placed on it, we could decide that it lacks higher intelligence and is therefore a tool of its user. The user, after all, must provide context for any proof...