Now that we have our window, we want to start filling it with some content. For this, we will be using a subset of the Windows API named Windows GDI. GDI stands for Graphics Device Interface, and consists of a variety of functions for drawing graphics and formatted text onto a surface – either a window on the screen, or a printer. We will probably not be drawing much to the printer though.

Check out the commit named “Introduction: Drawing graphics” in the GitHub repository to see the complete code for this post.

First of all, let us move the drawing code out of WindowProc. We want to keep the window procedure as small and easy to read as possible, so whenever we implement an event handler longer than a handful of lines, we will probably want to isolate it into its own function that we can call from the window procedure.

      case WM_PAINT:
         DrawToWindow(windowHandle);
         return 0;

Inside our DrawToWindow() function, we still need to start out by retrieving the device context and the client rectangle before we draw the white background. This is no different from what we have done before. We actually did use a GDI function in the previous version, namely FillRect(), and we will still be using that to draw the background for our window.
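To give an idea of the overall structure, the skeleton of DrawToWindow() might look roughly like this (the exact signature and the brush used for the background are assumptions based on the previous post, so adapt them to your own code):

void DrawToWindow(HWND windowHandle)
{
   // Begin painting and get the device context to draw into.
   PAINTSTRUCT paintStruct;
   HDC deviceContext = BeginPaint(windowHandle, &paintStruct);

   // Retrieve the client rectangle and fill it with a white background.
   RECT clientRect;
   GetClientRect(windowHandle, &clientRect);
   FillRect(deviceContext, &clientRect, static_cast<HBRUSH>(GetStockObject(WHITE_BRUSH)));

   // ... the drawing code from the rest of this post goes here ...

   EndPaint(windowHandle, &paintStruct);
}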

Now, let’s try to draw a couple of shapes. There are several different functions we could use for this, but let’s make it simple and draw a square and an ellipse. For these shapes we can use the Rectangle() and Ellipse() functions, respectively.

   // Draw outlined shapes
   Rectangle(deviceContext, 20, 20, 150, 150);
   Ellipse(deviceContext, 180, 20, 400, 150);

This will simply draw a rectangle and an ellipse at the given coordinates. As is the case with most (if not all) GDI functions, the first parameter is the device context. The subsequent numerical ones specify the left, top, right and bottom of the shape, in that order.

The fact that we are doing all our drawing as a response to the WM_PAINT message means that our graphics will still be visible after the window is resized or brought back to the front after being hidden behind some other window. If we had only drawn our graphics once when the window was created, without updating it whenever an update is requested, we would get a completely different result.

Now, let’s try to add some color and variation to our shapes. For this we will use a GDI concept called brushes, which were briefly mentioned in the previous post. A brush allows us to fill a region with a pattern or a solid color. We will try both.

To create a solid color brush, we use the API function CreateSolidBrush(), passing in a COLORREF that describes the color we want. Let’s choose red for our first shape.

HBRUSH redBrush = CreateSolidBrush(RGB(255, 0, 0));

The RGB macro represents exactly what its name suggests – it’s a way to specify a color using separate red, green and blue components. Each component is given as an unsigned 8-bit integer, meaning it can range from 0 to 255. For our red color, we want to set the red component to the maximum while keeping the green and blue components at zero.
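Just to illustrate how the components combine, here are a few more colors expressed with the RGB macro (these particular values are examples only, not something we use in this post):

COLORREF black = RGB(0, 0, 0);       // all components at zero
COLORREF yellow = RGB(255, 255, 0);  // full red and green, no blue
COLORREF gray = RGB(128, 128, 128);  // all components at half intensity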

OK, so we have our brush. Now what? Well, we also need to decide upon a position and size for our shape. For this we will use our old friend, the RECT structure. When we have defined our rectangle, we can draw it using FillRect(), the same function that we used to draw the window background.

   RECT redRectangle{ 20, 170, 150, 300 };
   FillRect(deviceContext, &redRectangle, redBrush);

An important thing to be aware of is that whenever we create a GDI object using CreateSolidBrush() or any other function starting with “Create”, we are allocating a resource. And when we allocate a resource, we must also be careful to release it when it’s no longer needed. Otherwise we will leak that resource, which we definitely don’t want. To release our GDI objects, we use the DeleteObject() API function.

DeleteObject(redBrush);

A good way to ensure that objects are released correctly is to encapsulate them into classes. We will not do that here, as this project is just for demonstrating the concepts, but at a later point when we are trying to write an actual game application, we will definitely pay more attention to good software design practices.
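Just to sketch the idea (this wrapper is hypothetical and not part of the project code), such a class could look something like this:

class ScopedGdiObject
{
public:
   explicit ScopedGdiObject(HGDIOBJ object) : m_object(object) {}
   ~ScopedGdiObject() { DeleteObject(m_object); }

   // Non-copyable, so the underlying object is only deleted once.
   ScopedGdiObject(const ScopedGdiObject&) = delete;
   ScopedGdiObject& operator=(const ScopedGdiObject&) = delete;

   HGDIOBJ get() const { return m_object; }

private:
   HGDIOBJ m_object;
};

With something like this in place, a brush created at the top of DrawToWindow() would be deleted automatically when the function returns, even if we returned early.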

While we’re at it, let’s also try to draw a shape filled with a pattern. One way to do this is to use a hatch brush. To create a hatch brush, we call the API function CreateHatchBrush(). It takes two parameters, namely the hatch style and the color. We will try the “diagonal cross” pattern.

HBRUSH blueBrush = CreateHatchBrush(HS_DIAGCROSS, RGB(0, 0, 255));

This time, instead of a rectangle, we will draw an ellipse. However, as we already know, the Ellipse() function doesn’t have a parameter that specifies which brush to use, so we need to do things in a slightly different way. Many of the GDI functions rely on GDI objects being “selected into” the device context beforehand, and this is what we need to do here. We will use the SelectObject() function for this, passing in handles to the device context and the object we want to select – in this case the newly created brush.

HBRUSH oldBrush = static_cast<HBRUSH>(SelectObject(deviceContext, blueBrush));

Note that we are capturing and storing the return value of the function. The SelectObject() function will return a handle to the GDI object that we are replacing. There is a very specific reason why we need this handle.

As we already know, the brush we created will at some point be deleted. However we should never delete an object that is currently selected into a device context, because doing so can lead to unexpected behavior that can be very hard to explain and troubleshoot. The way to avoid this is simply to select the default object back into the device context once we are done drawing with our own object.

Knowing this, we can draw our ellipse in a safe way by calling Ellipse(), then selecting the default object back into the device context, and finally deleting our brush.

   Ellipse(deviceContext, 180, 170, 400, 300);
   SelectObject(deviceContext, oldBrush);
   DeleteObject(blueBrush);

Now we have our red rectangle, and an ellipse filled with a diagonal cross hatch. Note that unlike the other shapes, the rectangle does not have a border. If we wanted a border, we could have used the Rectangle() function instead of FillRect(), but then we would have to select our brush into the device context just like we did for our ellipse.
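Just as a sketch of what that would look like (we will not use this in the actual code), a filled rectangle with a border could be drawn like so:

   // Select the fill brush, draw the rectangle (the border comes from the current pen), then restore and clean up.
   HBRUSH fillBrush = CreateSolidBrush(RGB(255, 0, 0));
   HBRUSH previousBrush = static_cast<HBRUSH>(SelectObject(deviceContext, fillBrush));
   Rectangle(deviceContext, 20, 170, 150, 300);
   SelectObject(deviceContext, previousBrush);
   DeleteObject(fillBrush);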

Now we know how to draw shapes into our window. But what if we wanted to display some text? Well, now that we have some basic knowledge of how to draw stuff using GDI, displaying text is easy peasy.

First of all, we obviously need to decide what text we want to display. Just as an excuse for introducing a couple more API functions, I want to retrieve the text from the window title. To do that, we first need to know how long the text is so we can allocate enough space for it. This is done by calling the GetWindowTextLength() function. Afterwards we can retrieve the actual text by calling GetWindowText().

   size_t titleLength = GetWindowTextLength(windowHandle);
   std::vector<char> windowTitle(titleLength + 1);
   GetWindowText(windowHandle, windowTitle.data(), static_cast<int>(titleLength + 1));

Note that we are allocating space for one additional character. This is because the number of characters we get from GetWindowTextLength() does not include a terminating null character, so we need to allocate extra space for that.

The next thing we need to decide is which font to use. Creating a font follows much the same pattern as creating a brush. We create it using the CreateFont() function, then we select it into the device context using SelectObject(). When we are done using our font, we delete it using DeleteObject() (of course, remembering to select the default font back into the device context first).

The CreateFont() function has a long list of parameters that describe the various properties of the font we want to create, such as the font family, the size, the weight and so on. I will not describe them in detail here.

   HFONT textFont = CreateFont(32, 20, 0, 0, FW_REGULAR, FALSE, FALSE, FALSE, ANSI_CHARSET, OUT_TT_PRECIS,
      CLIP_DEFAULT_PRECIS, CLEARTYPE_QUALITY, DEFAULT_PITCH, "Arial");

Now that we have our font, we can select it into the device context, making sure to save a handle to the default font.

HFONT oldFont = static_cast<HFONT>(SelectObject(deviceContext, textFont));

Next, we need to find out how much screen space we need for displaying our text string. This will of course depend on both the text itself and the font. We will be using the DrawText() function to draw our text, but this function needs us to pass in a RECT structure that defines the rectangle in which the text should be displayed. So we need to somehow figure out how large this rectangle needs to be.

It turns out that DrawText() has a special mode that can help us with this. By passing the DT_CALCRECT flag, the function will not actually draw the text, but instead calculate the required screen size and write it to the RECT structure that we passed in.

   RECT textLocalRect{};
   DrawText(deviceContext, windowTitle.data(), static_cast<int>(titleLength), &textLocalRect, DT_CALCRECT);

Now, to set up the actual rectangle we want to draw our text in, we select a position in the window, and use the values we got from DrawText() to determine the width and height.

   int textWidth = textLocalRect.right - textLocalRect.left;
   int textHeight = textLocalRect.bottom - textLocalRect.top;

   RECT textDrawRect;
   textDrawRect.left = 20;
   textDrawRect.right = 20 + textWidth;
   textDrawRect.top = 450;
   textDrawRect.bottom = 450 + textHeight;

Now we have all the information we need to draw our string, so we can make the final call to DrawText() before doing the cleanup.

   DrawText(deviceContext, windowTitle.data(), static_cast<int>(titleLength), &textDrawRect, DT_CENTER);

   SelectObject(deviceContext, oldFont);
   DeleteObject(textFont);

That’s it – we now have text in our window!

We have now gained some insight into how graphics are drawn in the Windows way. However, interacting with the screen in this way is far from optimal in terms of performance, because there is a very thick layer of abstraction between the application and the graphics hardware.

In the old days of MS-DOS, this abstraction would not be present, and game developers would just obtain a pointer to video memory and write pixel data directly into it. In many cases the drawing code, or at least parts of it, would be written in assembly code because that was the only way to make it fast enough.

Hardware capabilities have obviously changed dramatically since then, and writing directly to video memory is no longer a viable way of displaying graphics on the screen. Nowadays, with hardware-accelerated graphics, we have very different ways to optimize our graphics applications. In this blog we will stick with GDI-based drawing techniques for a while, but at some point we will move on to an API that supports hardware acceleration. When we go from 2D to 3D, that will be an absolute necessity.

In the next post, we will try to move our shapes around to create a simple animation. Stay tuned!
