Studies of implicit learning often examine people's sensitivity to sequential structure. Computational accounts have evolved to reflect this bias. An experiment conducted by Neil and Higham [Neil, G. J., & Higham, P. A. (2012). Implicit learning of conjunctive rule sets: An alternative to artificial grammars. Consciousness and Cognition, 21, 1393–1400] points to limitations in the sequential approach. In the experiment, participants studied words selected according to a conjunctive rule. At test, participants discriminated rule-consistent from rule-violating words but could not verbalize the rule. Although the data elude explanation by sequential models, an exemplar model of implicit learning can explain them. To make the case, we simulate the full pattern of results by incorporating vector representations of the words used in the experiment, derived from the large-scale semantic space models LSA and BEAGLE, into an exemplar model of memory, MINERVA 2. We show that basic memory processes in a classic model of memory capture implicit learning of non-sequential rules, provided that stimuli are appropriately represented.
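For readers unfamiliar with the model, MINERVA 2 (Hintzman's multiple-trace exemplar model) stores each study item as a separate trace and evaluates a test probe by summing its cubed similarity to every trace, yielding a global familiarity signal (echo intensity). The following is a minimal sketch of that computation, assuming random ±1 feature vectors rather than the LSA- or BEAGLE-derived word representations used in the simulations reported here.

```python
import numpy as np

def echo_intensity(probe, memory):
    """MINERVA 2 familiarity signal: the probe's similarity to each stored
    trace (normalized dot product), cubed to sharpen close matches while
    preserving sign, then summed across all traces."""
    n_features = probe.size
    similarities = memory @ probe / n_features   # one similarity per trace
    activations = similarities ** 3              # odd power keeps the sign
    return activations.sum()

# Illustrative use with arbitrary binary feature vectors (an assumption,
# not the paper's stimuli): studied items produce a stronger echo than
# novel items, the basis for old/new (or rule-consistent) discrimination.
rng = np.random.default_rng(0)
studied = rng.choice([-1.0, 1.0], size=(20, 50))  # 20 traces, 50 features
old_probe = studied[0]                            # a studied item
new_probe = rng.choice([-1.0, 1.0], size=50)      # an unstudied item
```

With such representations, `echo_intensity(old_probe, studied)` will generally exceed `echo_intensity(new_probe, studied)`, since the old probe's perfect self-match contributes a cubed similarity of 1 while all other similarities hover near zero.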