Choose an algorithm (see the section on paper titles that will get you accepted at a conference).
Choose the data set.
Get several graduate students to do it for you separately; pick the best results.
Tune your algorithm by selecting its parameters on the test set. [Add a little noise if need be to make this look less obvious.]
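Lest anyone doubt why this particular sin works, here is a minimal sketch (with entirely assumed numbers) of what selecting on the test set buys you: even classifiers that are pure coin flips will produce one that looks well above chance, provided you try enough of them and report only the winner.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed setup: labels are pure coin flips, so no classifier can
# genuinely beat 50% accuracy on average.
n_test = 100
y_test = rng.integers(0, 2, size=n_test)

# "Tune" by trying 50 random parameter settings, each producing random
# predictions, and keep whichever scores best on the *test* set.
best_acc = 0.0
for _ in range(50):
    preds = rng.integers(0, 2, size=n_test)
    acc = float((preds == y_test).mean())
    best_acc = max(best_acc, acc)

# The selected "best" accuracy is optimistically biased above chance,
# purely because the test set was used for selection.
print(best_acc)
```

The same selection bias applies whenever any choice (features, seeds, stopping time) is made by peeking at the test set.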
Use a data set that has not been used before so that your algorithm will be the best.
Just do toy problems to save time (call them “illustrations”).
If you are lucky enough to work in information retrieval, carefully select your target concept from the wide range available.
Always interpret your results as “comparing favourably with the state of the art”.
If the results are quite poor, stress that they are only preliminary.
Never produce statistically significant experiments.
If you do run multiple experiments, do not quote error bars; simply quote the best result obtained – people are more interested in the best.
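For the unconvinced, a small sketch with assumed scores shows the gap between the honest report (mean with a standard error) and the tempting one (the single best run, which sits systematically above the mean):

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed scores from 10 repeated runs of the same method
# (synthetic numbers, for illustration only).
scores = rng.normal(loc=0.80, scale=0.03, size=10)

mean = scores.mean()
stderr = scores.std(ddof=1) / np.sqrt(len(scores))  # standard error of the mean
best = scores.max()

# Honest report: mean +/- stderr.  Tempting report: just `best`,
# which by construction can never fall below the mean.
print(f"{mean:.3f} +/- {stderr:.3f}  vs  best run {best:.3f}")
```

The more runs you do, the further the maximum drifts above the mean, so this trick rewards diligence.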
Redefine the performance measure until your method looks good.
Do not present your data graphically; it is much better to use a very large table of numbers. Do not truncate to a couple of digits of precision.
Exploit monotonic transformations of the results to make your data look better (logarithms are particularly useful here).
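A quick illustration of the log trick, using assumed error rates: a 10-point gap between two methods near the top of a linear [0, 1] axis shrinks to a sliver on a log axis spanning [0.001, 1], even though the ordering is unchanged.

```python
import numpy as np

# Assumed error rates: your method at 0.50, the baseline at 0.60.
ours, baseline = 0.50, 0.60

# On a linear axis from 0 to 1 the gap occupies 10% of the axis.
gap_linear = (baseline - ours) / (1.0 - 0.0)

# On a log axis from 0.001 to 1 the same gap occupies under 3% of the
# axis: log(0.6/0.5) / log(1/0.001).
gap_log = (np.log(baseline) - np.log(ours)) / (np.log(1.0) - np.log(0.001))

print(gap_linear, gap_log)
```

Monotonic transforms preserve which method wins but not how decisive the win looks, which is precisely what makes them useful here.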
Ensure you omit essential details to prevent competitors from reproducing your results.
Make your code available, but make sure it is buggy and only compiles on exotic compilers that nobody else has access to.
Instead of conducting a complete experiment, do a partial experiment, and then derive the optimal result from the partial experiment. This is great for variants of error-correcting output codes or other systems which solve many sub-problems.
Use statistics whose assumptions are manifestly false to derive statements of excessive certainty. For example, it is popular to assume that cross-validated error estimates are independent across folds.
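As a sketch of this particular abuse, with assumed per-fold numbers: treating the k fold estimates as independent samples licenses the usual standard-error formula below, which understates the true uncertainty because folds trained on 90%-overlapping data are positively correlated (indeed, no unbiased variance estimator for k-fold cross-validation exists).

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed per-fold error rates from 10-fold cross validation
# (synthetic numbers, for illustration only).
fold_errors = rng.normal(loc=0.15, scale=0.02, size=10)

mean_err = fold_errors.mean()

# The "popular" (and false) assumption: the 10 fold estimates are
# independent samples, giving this optimistically small standard error.
naive_se = fold_errors.std(ddof=1) / np.sqrt(len(fold_errors))

# In reality the folds share most of their training data, so the
# estimates are correlated and the true uncertainty is larger.
print(mean_err, naive_se)
```

The smaller the quoted error bar, the more certain the claim looks, so the false assumption pays for itself.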
When you cannot experimentally demonstrate the effect your paper is about, include good experiments that illustrate some other point.
Conduct carefully designed, statistically well-founded and significant experiments, documented fully and carefully enough to be readily reproduced by other researchers.