The microarchitectural design space of a new processor is too large for an architect to evaluate in its entirety. Even with the use of statistical simulation, evaluating a single configuration can take an excessive amount of time due to the need to run a set of benchmarks with realistic workloads. This paper proposes a novel machine-learning model that can quickly and accurately predict the performance and energy consumption of any new program on any microarchitectural configuration. This architecture-centric approach uses prior knowledge from offline training and applies it across benchmarks, allowing our model to predict the performance of any new program across the entire microarchitecture configuration space with just 32 further simulations. First, we analyze our design space and show how different microarchitectural parameters affect the cycles, energy, energy-delay (ED), and energy-delay-squared (EDD) of the architectural configurations. We show the accuracy of our predictor on SPEC CPU 2000 and how it can be used to predict programs from a different benchmark suite. We then compare our approach to a state-of-the-art program-specific predictor and show that we significantly reduce prediction error: the average error when predicting performance drops from 24 percent to just 7 percent, and the correlation coefficient rises from 0.55 to 0.95. Finally, we evaluate the cost of offline learning and show that we can still achieve a high correlation coefficient when training on just five benchmarks.
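The idea of combining offline knowledge with a handful of further simulations can be sketched as follows. This is an illustrative toy, not the paper's actual model: here a new program's response (e.g., cycle counts) across all configurations is approximated as a linear combination of the training benchmarks' responses, with the weights fit by least squares on a small sample of simulated configurations. All function and variable names are hypothetical.

```python
# Toy sketch of an architecture-centric predictor (illustrative assumption,
# not the paper's model). Offline, we record each training benchmark's
# response across every configuration; online, a new program is simulated on
# only a few sampled configurations, and its full response is extrapolated.

def solve(a, b):
    """Solve the linear system a x = b by Gaussian elimination with
    partial pivoting (pure Python; fine for the small systems used here)."""
    n = len(b)
    a = [row[:] for row in a]
    b = b[:]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (b[r] - sum(a[r][c] * x[c] for c in range(r + 1, n))) / a[r][r]
    return x

def fit_weights(new_samples, train_samples):
    """Least-squares weights w such that, on the sampled configurations,
    new_samples ~= sum_k w[k] * train_samples[k] (via the normal equations)."""
    k, n = len(train_samples), len(new_samples)
    ata = [[sum(train_samples[i][t] * train_samples[j][t] for t in range(n))
            for j in range(k)] for i in range(k)]
    atb = [sum(train_samples[i][t] * new_samples[t] for t in range(n))
           for i in range(k)]
    return solve(ata, atb)

def predict(weights, train_full):
    """Extrapolate the new program's response to every configuration
    as the weighted sum of the training benchmarks' full responses."""
    m = len(train_full[0])
    return [sum(w * train_full[k][c] for k, w in enumerate(weights))
            for c in range(m)]

# Hypothetical usage: two training benchmarks measured on 10 configurations;
# the new program is simulated on only the first 4 configurations.
train_full = [[100 + 5 * i for i in range(10)],
              [200 - 3 * i for i in range(10)]]
new_full = [0.3 * a + 0.7 * b for a, b in zip(*train_full)]
w = fit_weights(new_full[:4], [t[:4] for t in train_full])
pred = predict(w, train_full)
```

The same scheme scales in the obvious way: with more training benchmarks and more sampled simulations (the paper uses 32), the least-squares fit becomes better conditioned, and a richer model than a plain linear combination can be substituted without changing the overall offline/online split.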