Raytracing is a rendering technique for displaying a three-dimensional scene on a two-dimensional, rasterized plane such as a computer display or any 2D image inside a window. Although it is a very old technique, it is not what modern graphics hardware does to produce its images, because it is computationally expensive. Nevertheless, the method has its advantages. The initial computations needed to render the scene may be very complex, but advanced features then come almost for free: shading, reflection, refraction and occlusion. As a consequence, there is a point at which raytracing beats other techniques in terms of performance, namely when the scene becomes very complex. For this reason, Intel, for example, is researching hardware-supported raytrace rendering.
But how does it work? The principle is simple: we take a point in space to represent our camera or our eye. In front of this point hovers a rectangular plane made up of small raster elements. This is our display, our window, or whatever will show the picture we want to create. Behind all that is the scene of 3D objects to render. So we need to do the following:

1. For every raster element (pixel), shoot a ray from the eye through that element into the scene.
2. Test the ray against the objects in the scene and find the closest intersection point, if any.
3. Color the pixel according to the object that was hit, or with a background color if nothing was hit.
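The procedure described here can be sketched as a minimal, unoptimized raytracer. This is only an illustration, not a complete renderer: the single sphere, the camera at the origin, and the image plane at distance 1 are all assumptions made for the example.

```python
import math

def intersect_sphere(origin, direction, center, radius):
    """Return the distance t along the ray to the nearest hit, or None.

    Solves the quadratic |origin + t*direction - center|^2 = radius^2;
    `direction` is assumed to be normalized, so the quadratic's leading
    coefficient is 1.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None  # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None

def render(width, height):
    """Cast one ray per raster element from the eye through the image plane."""
    eye = (0.0, 0.0, 0.0)             # camera position (assumption)
    sphere_center = (0.0, 0.0, -3.0)  # the only scene object (assumption)
    sphere_radius = 1.0
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on a plane at z = -1 in front of the eye.
            px = 2.0 * (x + 0.5) / width - 1.0
            py = 1.0 - 2.0 * (y + 0.5) / height
            length = math.sqrt(px * px + py * py + 1.0)
            direction = (px / length, py / length, -1.0 / length)
            hit = intersect_sphere(eye, direction, sphere_center, sphere_radius)
            row.append(1 if hit is not None else 0)  # 1 = object hit, 0 = background
        image.append(row)
    return image
```

Rendering, say, an 11x11 grid with `render(11, 11)` yields a block of 1s (the sphere) surrounded by 0s (background); a real renderer would replace the 1/0 flag with a computed color.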
This already produces a simple image on our screen. If we know the point where a light ray has hit an object, we can do further calculations:
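One concrete example of such a further calculation is diffuse (Lambertian) shading: the brightness at the hit point is proportional to the cosine of the angle between the surface normal and the direction toward the light. This is a hedged sketch; the sphere geometry, the light position and the albedo value are illustrative assumptions, not part of any particular renderer.

```python
import math

def lambert_shade(hit_point, sphere_center, light_pos, albedo=0.8):
    """Diffuse (Lambertian) brightness at a ray-sphere hit point.

    Returns albedo * max(0, cos(theta)), where theta is the angle between
    the surface normal and the direction from the hit point to the light.
    """
    # Surface normal of a sphere: from the center toward the hit point.
    normal = [p - c for p, c in zip(hit_point, sphere_center)]
    n_len = math.sqrt(sum(v * v for v in normal))
    normal = [v / n_len for v in normal]
    # Direction from the hit point toward the light source.
    to_light = [l - p for l, p in zip(light_pos, hit_point)]
    l_len = math.sqrt(sum(v * v for v in to_light))
    to_light = [v / l_len for v in to_light]
    cos_theta = sum(n * l for n, l in zip(normal, to_light))
    # Surfaces facing away from the light receive no direct illumination.
    return albedo * max(0.0, cos_theta)
```

For a hit point facing the light directly this returns the full albedo; for a point facing away it returns zero, which is exactly the hard terminator you see on a diffusely lit sphere.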