How many servers you can fit in a rack depends on numerous factors.
A) Server size. We have a couple of storage servers that are 4U/server (so you could only fit 10 servers/rack). We also have some that pack 8 physical servers into a 3U chassis (which would give you 112 servers/rack). Most servers are 1U/server (so 42/rack).
B) Power - What is the power limit per rack that your data center can support? Servers are no good if you don't have enough amperage to power them.
C) Switch density - Will each server have one network connection, or multiple? How many switches will you need in each rack?
D) Heat - The higher the density, the more heat you are generating in that area. Can your DC support this? Will you need additional cooling?
E) Other equipment - Having 42U of space does not necessarily mean you have 42U of space for servers. Don't forget about the space taken up by switches, firewalls, KVMs, UPSes, PDUs, etc.
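To make the space arithmetic in (A) concrete, here is a minimal sketch. The chassis figures are the ones quoted above; the 42U rack height is assumed (see the next answer for why it varies):

```python
RACK_U = 42  # assumed full-height rack

def servers_per_rack(chassis_u, servers_per_chassis=1, rack_u=RACK_U):
    """How many servers fit if physical space were the only constraint."""
    return (rack_u // chassis_u) * servers_per_chassis

print(servers_per_rack(4))     # 4U storage servers -> 10
print(servers_per_rack(3, 8))  # 8 servers per 3U chassis -> 112
print(servers_per_rack(1))     # 1U servers -> 42
```

Of course, points (B) through (E) usually bite before you reach these numbers.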
A standard rack unit (1U) is 1.75" of vertical height. The actual number of units available in a 'full-height' rack varies, but is typically 42U to 45U, or even 47U of empty slots, though your provider will likely steal away at least one for the patch panel delivering your connection. Rack-mounted servers are designed to occupy a certain number of these 'U' (1U, 2U, 3U, 4U and so on), which indicates the space they will take once fitted in place (a 1/2U is a short-form chassis that can theoretically be mounted two per 1U, back-to-back).
Then we get down to what you can actually put in, which depends on a few factors: rack depth (a deep server chassis won't fit a shallow rack, so check that first), floor strength (maximum loading), the maximum wattage / power density of the DC zone your cabinet will be in (in effect, the ability to supply power in and shift the heat out), the specific power supply to your cabinet (usually given in amps / kVA), and the vertical height of each server (its number of 'U').
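As a rough sketch of how the space and power constraints interact, assuming some illustrative figures (a 42U rack, a single 32A feed at 230V, a hypothetical 1U server drawing 350W at load, and an assumed 80% continuous-load derating on the feed):

```python
def rack_capacity(rack_u, server_u, feed_watts, server_watts, derate=0.8):
    """Servers per rack: the lower of the space limit and the power limit.
    derate: fraction of the feed's rating you allow for continuous load
    (0.8 is an assumption here, not a figure from this thread)."""
    space_limit = rack_u // server_u
    power_limit = int(feed_watts * derate // server_watts)
    return min(space_limit, power_limit)

# 42U rack, 32A at 230V (~7360W), 350W per 1U server:
print(rack_capacity(42, 1, 32 * 230, 350))  # power-limited: 16 servers
```

With these numbers the cabinet runs out of power long before it runs out of slots, which is exactly the "one limit or another" point made below.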
It seems broadly accepted (by which I mean recommended) by the main cabinet manufacturers, DC fabricators, etc. not to leave empty spaces between servers (fill them with blanking panels if they appear), as this interferes with your cooling airflow, although you do always hear the odd story that suggests otherwise. This again relates to the maximum wattage of the cabinet and to the HVAC arrangement (CACS/HACS/room-based cooling, etc.). There are a lot of factors to consider, but as a rule of thumb, nobody should expect to fill all their available space, as one limit or another will be reached first. (And yes, there are those who double-bank servers at 1/2U, but that's a whole other story.)
In a well-made datacentre, on the balance of probabilities, expect your available power supply to be the limiting factor though...
We find that with our normal power usage we realistically fit around 25-30 servers per rack. That way we have flexibility in power and we are not running the risk of tripping any breakers. We also noticed a big difference when we moved from HDDs to SSDs: our power usage decreased as well.
42 1U servers in a rack is possible, agreed, but you may have issues in this case: power problems, and wiring that becomes complicated and lengthy. Keep all of this in mind and go with a few less; 35 to 36 is reasonable if you are planning a new deployment.
I'm not so sure that modern, well-configured servers will take a _LOT_ of power... older machines, yes, obviously (which is a problem we have often seen with people who like to buy bargain-priced retired stock, and why we don't do co-location for them). There is a reason they are cheap: often the power they eat over a year or two adds up to more than they are worth the trouble of. I remember seeing someone take a Dell R900 and plug in a couple of MD3000 disk enclosures, and the combined draw was the full cabinet allowance in just ~10U (a quarter cabinet). Sadly, the days of cheap electricity are long gone...
But on the flip side of that coin: our Xeon E3-1245 v6 machines, for example, with DDR4 RAM and SSD drives, draw about 0.2 amps (at 204-240V) under low use and ~0.3 amps at full load. Average that out across a cabinet (especially as we have dual 32-amp circuits per cabinet) and you start to realise that heat dissipation and space limitation are more of an issue for the likes of us. This is why ISPs tend to like blade servers for high-density hosting: they use a lot less physical space for their boxes and for any associated cabling / support items (some designs even have the switches embedded!). So everyone's mileage will vary, and frankly, as always in life, you get what you pay for.
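Running the numbers above makes the point: at ~0.3A per server on dual 32A circuits, the power budget covers far more servers than the rack has slots (the 80% continuous-load derating here is my assumption, not a figure from the post):

```python
# Figures from above: ~0.3 A per server at full load, dual 32 A feeds.
amps_per_server = 0.3
feed_amps = 2 * 32
usable_amps = feed_amps * 0.8  # assumed 80% derating -> 51.2 A

power_limited = int(usable_amps // amps_per_server)
print(power_limited)            # ~170 servers' worth of power
print(min(power_limited, 42))   # but only ~42 slots: space-limited
```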
The big thing people often ignore is the uplink to the data center core. If you only have a single 100 Mbps or 1000 Mbps link to the core, you may be able to get 40+ servers in your rack, but your network performance is going to be terrible. On today's internet, the recommendation is a minimum of a 10 Gig uplink, if not more, if you plan on filling the entire rack.
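A quick back-of-the-envelope check shows why: divide the uplink evenly across the servers in the rack (a simplification, since real traffic is bursty rather than evenly shared):

```python
def per_server_mbps(uplink_mbps, servers):
    """Naive even split of the rack uplink across all servers."""
    return uplink_mbps / servers

print(per_server_mbps(1000, 40))    # 1 Gbps uplink, 40 servers -> 25.0 Mbps each
print(per_server_mbps(10_000, 40))  # 10 Gbps uplink -> 250.0 Mbps each
```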