The guide outlines the main evaluation challenges associated with ALMPs and shows how to obtain rigorous impact estimates using two leading evaluation approaches. The most credible and straightforward evaluation method is a randomized design, in which a group of potential participants is randomly divided into a treatment group and a control group. Random assignment ensures that the two groups would have had similar experiences in the post-program period in the absence of the program intervention; the observed post-program difference in outcomes therefore yields a reliable estimate of the program impact. The second approach is a difference-in-differences design, which compares the change in outcomes between the participant group and a selected comparison group from before the program to after its completion. In general, the outcomes of the comparison group may differ from those of the participant group even in the absence of the program intervention. If the gap observed prior to the program would have persisted in the absence of the program, however, then the change in the outcome gap between the two groups yields a reliable estimate of the program impact. The guide reviews the various steps in the design and implementation of ALMPs, and in the subsequent analysis of the program data, that will ensure a rigorous and informative impact evaluation using either of these two techniques.
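
To make the two estimators concrete, the expressions below are a minimal sketch in notation of our own choosing (not the guide's): let \bar{Y} denote the mean outcome, with subscript T for the treatment (participant) group and C for the control or comparison group, and superscripts pre and post for the periods before and after the program.

    \hat{\tau}_{\text{randomized}} = \bar{Y}^{\text{post}}_{T} - \bar{Y}^{\text{post}}_{C}

    \hat{\tau}_{\text{DiD}} = \left( \bar{Y}^{\text{post}}_{T} - \bar{Y}^{\text{pre}}_{T} \right) - \left( \bar{Y}^{\text{post}}_{C} - \bar{Y}^{\text{pre}}_{C} \right)

The first expression is a reliable impact estimate because random assignment equalizes the two groups, on average, in the absence of the program. The second is reliable only under the stated assumption that the pre-program gap between the two groups would have persisted unchanged had the program not taken place.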