 
Summary: On Basing Lower-Bounds for Learning on Worst-Case Assumptions
Benny Applebaum
Boaz Barak
David Xiao
Abstract
We consider the question of whether P ≠ NP implies that there exists some concept class that is
efficiently representable but is still hard to learn in the PAC model of Valiant (CACM '84), where the
learner is allowed to output any efficient hypothesis approximating the concept, including an "improper"
hypothesis that is not itself in the concept class. We show that unless the Polynomial Hierarchy collapses,
such a statement cannot be proven via a large class of reductions including Karp reductions, truth-table
reductions, and a restricted form of non-adaptive Turing reductions. Moreover, a proof that uses a Turing
reduction of constant levels of adaptivity would imply an important consequence in cryptography, since it
would yield a transformation from any average-case hard problem in NP to a one-way function. Our results
hold even in the stronger model of agnostic learning.
These results are obtained by showing that lower bounds for improper learning are intimately related
to the complexity of zero-knowledge arguments and to the existence of weak cryptographic primitives.
In particular, we prove that if a language L reduces to the task of improper learning of circuits, then,
depending on the type of the reduction in use, either (1) L has a statistical zero-knowledge argument
system, or (2) the worst-case hardness of L implies the existence of a weak variant of one-way functions
defined by Ostrovsky-Wigderson (ISTCS '93). Interestingly, we observe that the converse implication
also holds.