I'll have to define something before I explain anything: subnets. As I'm sure you've realized already, the internet is a huge place. You could spend all your time just going through different hosts and never run out of them. So finding one particular host, if you don't already know where it is, would be like searching for a needle in a haystack.
Before the internet even existed, engineers already faced this problem and designed networking to be divided into smaller networks. The hosts in each of these subnets share a common part of their IP address, and this common part is what the mask described by ponkan defines. You might find this and that helpful.
The broadcast address is the theoretical IP that represents "all hosts" on the subnet. Pinging this address should, in essence, ping every computer on the subnet. It isn't used very often, however, and doesn't matter much here.
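To see how the broadcast address falls straight out of the network address and the mask, here's a quick sketch using Python's standard ipaddress module (the 192.168.1.0/24 subnet is just a typical home-network example):

```python
import ipaddress

# Hypothetical home subnet: 192.168.1.0 with a /24 mask (255.255.255.0).
net = ipaddress.IPv4Network("192.168.1.0/24")

print(net.network_address)    # 192.168.1.0   -- "this subnet"
print(net.broadcast_address)  # 192.168.1.255 -- "all hosts on this subnet"
print(net.num_addresses - 2)  # 254 usable host addresses in between
```

The broadcast address is always the last address in the range: every host bit set to 1.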
Now how do we put together all these subnets into one huge, easily accessible network, whether it be the Internet or something else? The answer is routers: machines dedicated to transferring data from a host in one subnet to a host in another. The Internet, with its millions of different networks, makes heavy use of them, and a simple page request will often go through dozens of routers before getting to the server and back to your client. Each time you connect through another computer on the network, your traffic passes through those routers, each step of the route being called a "hop".
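At each hop, a router decides where to send the packet next by finding the most specific subnet in its table that contains the destination. Here's a toy sketch of that lookup (the subnets and next-hop addresses are made up for illustration; real routers do this in hardware, far more cleverly):

```python
import ipaddress

# Toy routing table: each entry maps a destination subnet to a next hop.
routes = [
    (ipaddress.IPv4Network("10.0.0.0/8"),  "10.0.0.1"),
    (ipaddress.IPv4Network("10.1.2.0/24"), "10.1.2.254"),
    (ipaddress.IPv4Network("0.0.0.0/0"),   "192.168.1.1"),  # default route
]

def next_hop(dst: str) -> str:
    """Pick the most specific (longest-prefix) route containing dst."""
    addr = ipaddress.IPv4Address(dst)
    matches = [(net, hop) for net, hop in routes if addr in net]
    net, hop = max(matches, key=lambda m: m[0].prefixlen)
    return hop

print(next_hop("10.1.2.7"))  # 10.1.2.254 (the /24 wins over the /8)
print(next_hop("8.8.8.8"))   # 192.168.1.1 (only the default route matches)
```

The `0.0.0.0/0` entry matches everything, so there's always somewhere to send a packet; more specific routes simply take priority over it.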
The question is, who provides all these routers that people use to send and retrieve data all around the world? The average user can't just plug her computer into the wall and have instant access to all the subnets. Nor can she build her own router to link her subnet to the world's. The sheer amount of data that passes through Internet routers makes for extremely expensive infrastructure, both connection-wise and computer-wise. The end user can't afford a Cisco router with an OC-48 connection to go with it. Nor does she need that much bandwidth.
This is where ISPs (Internet Service Providers) come in. Essentially, ISPs build their own sets of high-bandwidth, high-cost routers to provide their service to the users, and link them to the routers through simpler connections, namely phone lines or cable.
Now for the default gateway. The default gateway is the router through which the client will first try to route connections to hosts outside its own subnet. Indeed, a given host on a large network could have access to several routers at any given time, and not every router will be able to reach the host being contacted. The default gateway is just that: the first hop the client uses when it can't deliver to a host directly.
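The decision itself is simple: if the destination falls inside the client's own subnet, deliver directly; otherwise, hand the packet to the gateway. A small sketch (the addresses are a hypothetical home setup, not anything standard):

```python
import ipaddress

# Hypothetical client configuration: address 192.168.1.42, /24 mask.
iface = ipaddress.IPv4Interface("192.168.1.42/24")
default_gateway = "192.168.1.1"

def first_hop(dst: str) -> str:
    """Deliver directly if dst is on our subnet, else send to the gateway."""
    if ipaddress.IPv4Address(dst) in iface.network:
        return dst              # same subnet: talk to it directly
    return default_gateway      # different subnet: first hop is the gateway

print(first_hop("192.168.1.10"))   # 192.168.1.10 -- local delivery
print(first_hop("93.184.216.34"))  # 192.168.1.1  -- routed via the gateway
```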
In most cases, when the user connects her computer directly to the ISP's modem (whatever it is), this address will be the aforementioned ISP router. But what if the user has more than one computer? Would she need to get multiple lines, one for each? Not very efficient in most cases...
Why not simply add a subnet for the user alone? Few users will have more than half a dozen computers at any given time, so they don't need costly routers and the high-bandwidth connections to go with them. This is where home routers come in. The home router acts exactly like any other router -- it links the home network to the Internet, so all clients can route their requests over the same connection. In this case, the default gateway is the user's router instead of the ISP's, as the clients hop through the home router before reaching the actual ISP router on the other end.
Are you still following me? Because I need to add one last nuance. As stated earlier, the Internet is immense, with millions of computers accessible on it. Subnets aside, each computer needs something to identify it, so it can be found and data routed to it... Many different schemes were devised for this, but the winner was IP (Internet Protocol), which identifies a computer with a 32-bit value divided into four bytes. I'll skip the rest of the details, as they've been covered in more or less detail already.
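The familiar dotted-quad notation is just a friendly spelling of that one 32-bit number: each of the four fields is one byte. A quick demonstration of the conversion in both directions:

```python
def to_u32(ip: str) -> int:
    """Pack a dotted-quad IP into its single 32-bit value."""
    a, b, c, d = (int(x) for x in ip.split("."))
    return (a << 24) | (b << 16) | (c << 8) | d

def to_dotted(n: int) -> str:
    """Unpack a 32-bit value back into dotted-quad notation."""
    return ".".join(str((n >> s) & 0xFF) for s in (24, 16, 8, 0))

print(to_u32("192.168.0.1"))  # 3232235521
print(to_dotted(3232235521))  # 192.168.0.1
```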
Now, 32 bits means 4,294,967,296 possible IPs... And if we want our computers to be identifiable, these IPs need to be unique and stable, or we'd have a lot of trouble finding the same computer twice. For easier management, this address space was divided into smaller groups, which often happen to be the aforementioned subnets. ISPs and various other organizations reserve IP ranges and then assign addresses from those ranges to their computers.
But with the exponential growth of the Internet, a clear problem arose -- four billion IPs is a lot, but it may not be enough, as there are far more machines than you might expect. If we were to give a unique IP to each machine, we'd quickly run out. For the sake of economy and simplicity, it was decided that some machines did not need a permanent, unique IP of their own. For example, the average internet user doesn't need her computer to be easily found at the same address -- she only wants to easily find other computers. And what if the computers on a given subnet change often, like laptops? Network admins would spend every waking hour assigning and removing addresses.
To mend this, DHCP (Dynamic Host Configuration Protocol) is used among various ISPs and companies. Instead of always giving the same address to a given computer, a wide pool of possible addresses is made. Each time a computer connects to its ISP/network, the DHCP servers check to see what IPs are taken and what IPs are free in this pool and assign one of those free addresses to the new connection. (There are actually a lot more variables in play in assigning a DHCP address, but I'll spare you the gory details.)
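The core idea -- a pool of free addresses handed out on demand and reclaimed later -- can be sketched in a few lines. This is a toy model, not the actual protocol (real DHCP adds leases with expiry times, renewals, subnet options, and a whole handshake):

```python
import ipaddress

class AddressPool:
    """Toy DHCP-style pool: hand out free addresses, reclaim on release."""

    def __init__(self, network: str, first: int, last: int):
        net = ipaddress.IPv4Network(network)
        # e.g. first=100, last=199 on 192.168.1.0/24 gives .100 through .199
        self.free = [str(net.network_address + i) for i in range(first, last + 1)]
        self.leases = {}  # client MAC address -> assigned IP

    def request(self, mac: str) -> str:
        if mac in self.leases:       # known client keeps its address
            return self.leases[mac]
        ip = self.free.pop(0)        # raises IndexError when exhausted
        self.leases[mac] = ip
        return ip

    def release(self, mac: str) -> None:
        self.free.append(self.leases.pop(mac))

pool = AddressPool("192.168.1.0/24", 100, 199)   # hypothetical .100-.199 range
print(pool.request("aa:bb:cc:dd:ee:01"))  # 192.168.1.100
print(pool.request("aa:bb:cc:dd:ee:02"))  # 192.168.1.101
```

Note that a client only holds an address while it's connected; once released, the same address can go to a completely different machine.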
This is all fine and well, but it doesn't fix the limited IP problem. If anything, it makes it worse, as ISPs and companies will need to reserve larger IP ranges just in case their networks grow. Giving a publicly recognizable IP to every machine would make demand explode beyond the 32-bit limit.
The IP engineers foresaw this, however, and set aside from the very beginning some IP ranges to be considered "private". For example, why would a home user with four workstations, or a company with 1000 employees, need a globally unique IP address for each machine in the subnet? Again, we only want those machines to be able to reach others, not the other way around. Therefore, "private" addresses are only meaningful within the subnet. As soon as a connection crosses the subnet's gateway, it is seen as coming from the "public" IP: that of the router.
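Python's ipaddress module already knows which ranges are reserved, so you can check any address directly (8.8.8.8 here is just a well-known public address used for contrast):

```python
import ipaddress

# Ask the standard library whether each address is in a reserved private range.
for ip in ["10.1.2.3", "172.16.9.1", "192.168.1.5", "8.8.8.8"]:
    print(ip, ipaddress.IPv4Address(ip).is_private)
# The first three are private; 8.8.8.8 is a public, routable address.
```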
Therefore, 99% of home routers, and even a few ISPs, use private IPs within their networks, absorbing some of the excess IP demand. Private IPs have advantages as well as disadvantages... The main advantage: you can't access the private computers unless you specifically make it possible. This makes for simple security. And the main disadvantage: you can't access the private computers unless you specifically make it possible. So it complicates things if you want a file server accessible from the Internet, or simply want to host a netgame!
And last, you can easily identify a private address, as it will fall within one of these ranges: 10.*.*.*, 172.16.*.* through 172.31.*.*, and, the most widespread, 192.168.*.*. (The 127.*.*.* range is reserved too, but for loopback: an address like 127.0.0.1 always refers to your own machine, not to another host on the subnet.)
Whew, I went a bit more in-depth than I expected... And that's only the tip of the iceberg. There are plenty of other systems devised to enhance this huge network, from DNS to IPv6... And I haven't even mentioned the hardware side. There's a ton of in-depth HOWTOs out there, but I tried to sum it up here. This crash course should be enough to let you understand a good part of your home network. Now, once I'm done eating, I'll skip over to your other topic to describe iptables and what it can do for you.