I’m not going to publish any benchmarks on how Apache does just by itself on the T2000. The reason is simple: I don’t have a gigabit switch where the machine is currently located, just a 100 Mbps switch, and the results Colm MacCárthaigh got show that the T2000 can saturate all four of its gigabit interfaces. Instead I’ll concentrate on applications in the near future. Here are Colm’s results, summarized:
How many requests the machine can handle in a second is probably the most valuable statistic when talking about webserver performance. It’s a direct measure of how many user requests you can handle. Fellow ASF committer Dan Diephouse has been producing some interesting requests-per-second stats for webservices (and they are impressive), but we were more interested in how many plain-old static files the machine could really ship in a hurry. And without further ado, those numbers are:
As you can see, the T2000 was able to sustain about 83,000 concurrent downloads, and my limited dtrace skills tell me that thread-creation at that point seemed to be the main limiting factor, which is hardly surprising. For us, that number represents an upper limit on what the machine could handle when faced with a barrage of clients. Of course, no server should ever be allowed to get into that kind of insane territory, but it’s always good to know that there is plenty of headroom. More to the point, it means that availability at the lower levels of concurrency is much higher.
Overall, the T2000 performs very impressively. At very low numbers of concurrency, it actually has a higher latency than either of the Dell machines we tested, but these latencies are of the order of tens of milliseconds. In other words, the network latency makes a bigger difference in the overall scheme of things.
As I said in an earlier post on the T2000, most datacenters have a limit on how much power one rack cabinet can draw. Here in the Netherlands, the max is mostly set at about 4000 Watts, or about 16 Amps. A few years ago I found out the hard way that (Intel based) computers draw significantly more during powerup – the datacenter I was hosting my own servers at in Amsterdam had a (very rare) power outage, and when power came back, the entire rack of computers tried to boot at the same time and tripped the 16 Amp circuit breaker. I had to come in (early on a Sunday, of course) and switch the machines on one by one.
A quick test shows the Pentium 4 machine I’ve been testing with draws about 1.6 Amps, the T2000 about 1. I’ll probably do some performance tests with a Sun 440 as well later on, but I don’t know offhand how much power that machine draws. More, if I recall correctly.
Anyway, you do the math: fill a cabinet with Pentium 4s and see where you max out, then fill a cabinet with T2000s – the T2000 wins hands down on “performance per watt”.
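For a rough sense of what that means per rack, here’s a tiny sketch using the draws measured above (the 16 Amp limit is the Dutch figure from the earlier post; currents are in integer tenths of an amp, to avoid floating-point rounding surprises):

```python
def machines_per_rack(draw_deciamps, limit_deciamps=160):
    """How many machines of a given steady-state draw fit under a rack's
    amp limit. Currents are in tenths of an amp: 160 = the 16 A limit."""
    return limit_deciamps // draw_deciamps

print(machines_per_rack(16))  # Pentium 4 at 1.6 A -> 10 machines per rack
print(machines_per_rack(10))  # T2000 at 1.0 A    -> 16 machines per rack
```

Note that this ignores the startup surge described above, so in practice you’d leave some extra headroom.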
So it won’t just save you a lot on floor space and your energy bill, it may just save you from an early Sunday trip as well…
Now that “negerzoenen” (chocolate-covered marshmallow treats, literally “negro kisses”) are no longer allowed, a campaign against “blanke vla” (“white custard”) is of course only logical…
“I say let the prisoners pick the fruits,” said Rep. Dana Rohrabacher of California, one of more than a dozen Republicans who took turns condemning a Senate bill that offers an estimated 11 million illegal immigrants an opportunity for citizenship.
So let’s consider the imagery on this one, shall we?
There are nearly 1.5 million prisoners incarcerated in state and federal prisons across the USA. Of those approximately 45% are African American, even though they make up only 13% of the population.
The proposal here is to deport the undocumented immigrants and replace them with an indentured population which is highly disproportionately African American. And in states like Alabama where African Americans are over 60% of the prison population? Well, it’s going to look an awful lot like 1850 there.
Apparently there is a cross-promotion between The Apprentice and Chevy to “make your own ad”. It’s a good bet that this is NOT what they had in mind.
If you’re wondering why I prefer Google over MSN, check out the search results at MSN for my own name:
update: Okay, so maybe I should have waited a day before posting it and turn it into an April Fools day joke, and regular msn users would have spotted the strange URL anyway, but I had Maarten point out something cool in the real MSN search results for my name. Here’s a screenshot:
The “odd” thing here is that the Technorati results and my postings on the CoolThreads T2000 machine rank higher than my work on the anti-spam plugin for WordPress, even though the latter has been on my website much, much longer. According to my statistics, I still get lots of visitors daily for the anti-spam plugin, and although the T2000 postings are getting lots of hits as well, the ranking of the search results is a surprise to me…
Oh, and if you want to create your own joke MSN search results, go here.
Does praying for a sick person’s recovery do any good?
In the largest scientific test of its kind, heart surgery patients showed no benefit when strangers prayed for their recovery.
And patients who knew they were being prayed for had a slightly higher rate of complications. The researchers could only guess why.
A one-two punch of bleaching from record hot water followed by disease has killed ancient and delicate coral in the biggest loss of reefs scientists have ever seen in Caribbean waters.
Researchers from around the globe are scrambling to figure out the extent of the loss. Early conservative estimates from Puerto Rico and the U.S. Virgin Islands find that about one-third of the coral in official monitoring sites has recently died.
“It’s an unprecedented die-off,” said National Park Service fisheries biologist Jeff Miller, who last week checked 40 stations in the Virgin Islands. “The mortality that we’re seeing now is of the extremely slow-growing reef-building corals. These are corals that are the foundation of the reef … We’re talking colonies that were here when Columbus came by have died in the past three to four months.”
“This is probably a harbinger of things to come,” said John Rollino, the chief scientist for the Bahamian Reef Survey. “The coral bleaching is probably more a symptom of disease — the widespread global environmental degradation — that’s going on.”
Crabbe said evidence of global warming is overwhelming.
“The big problem for coral is the question of whether they can adapt sufficiently quickly to cope with climate change,” Crabbe said. “I think the evidence we have at the moment is: No, they can’t. ”
Howard Kaloogian, a Republican candidate in California’s 50th Congressional District, has removed a picture from his campaign Web site that he claimed was evidence that journalists are distorting how bad conditions are in Iraq. The photo purported to show a placid street scene in downtown Baghdad, including a hand-holding couple in Western dress and shoppers out for a stroll on a cobblestone street in an unmarred business district.
As it turns out, the photo is a genuine street scene—from Istanbul, Turkey.
The state’s largest health insurer systematically — and illegally — cancels coverage retroactively for people who need expensive care, 10 former Blue Cross members claimed in lawsuits filed Monday.
The suits, filed simultaneously in Los Angeles, Orange, Riverside and San Bernardino counties, allege that Blue Cross of California and Blue Cross Life & Health operate a “retroactive review department” devoted to finding ways the company can escape its obligations to members who become seriously sick.
“Blue Cross’ conduct is particularly reprehensible because it was part of a repeated corporate practice and not an isolated occurrence,” according to the suits. The former members seek compensation, damages and court orders prohibiting the alleged practice.
Some 15 year old kids are excellent writers…
Eric Haney, a retired command sergeant major of the U.S. Army, was a founding member of Delta Force, the military’s elite covert counter-terrorist unit.
Q: What’s your assessment of the war in Iraq?
A: Utter debacle. But it had to be from the very first. The reasons were wrong. The reasons of this administration for taking this nation to war were not what they stated. (Army Gen.) Tommy Franks was brow-beaten and … pursued warfare that he knew strategically was wrong in the long term. That’s why he retired immediately afterward. His own staff could tell him what was going to happen afterward.
We have fomented civil war in Iraq. We have probably fomented internecine war in the Muslim world between the Shias and the Sunnis, and I think Bush may well have started the third world war, all for their own personal policies.
Q: What is the cost to our country?
A: For the first thing, our credibility is utterly zero. So we destroyed whatever credibility we had. … And I say “we,” because the American public went along with this. They voted for a second Bush administration out of fear, so fear is what they’re going to have from now on.
After 23 years as Emery County clerk, Bruce Funk will decide this morning whether he will resign because he cannot endorse an election on Utah’s new voting machines.
“In no way could I feel comfortable with these machines,” Funk said Monday. “I don’t want to be part of something that put into question the results that come out of Emery County.”
Earlier Monday, state Elections Director Michael Cragun and other state officials and engineers from Diebold Elections Systems met behind closed doors with the Emery County Commission. Their goal was to address Funk’s concerns about some of the machines’ computer memory that made him suspect they were not new or that something already had been loaded into their memories.
But Diebold told the commissioners that allowing unauthorized people access to the machines had violated their integrity.
It could cost upwards of $40,000 to fly in technicians to retest them.
Diebold’s $40,000 estimate is exaggerated to frighten other clerks from questioning the machines’ integrity, Funk said. “What they are really saying is, ‘We don’t want anyone else to think of doing this.’ ”
If the machines can’t be verified as uncompromised on voting day – by an election staffer at the voting location, multiple times throughout the day – that’s a huge problem. For the commission to accept Diebold’s line that “that’s the way it is” is simply incredible.
Unmanned aerial vehicles have soared the skies of Afghanistan and Iraq for years, spotting enemy encampments, protecting military bases, and even launching missile attacks against suspected terrorists.
Now UAVs may be landing in the United States.
A House of Representatives panel on Wednesday heard testimony from police agencies that envision using UAVs for everything from border security to domestic surveillance high above American cities. Private companies also hope to use UAVs for tasks such as aerial photography and pipeline monitoring.
“We need additional technology to supplement manned aircraft surveillance and current ground assets to ensure more effective monitoring of United States territory,” Michael Kostelnik, assistant commissioner at Homeland Security’s Customs and Border Protection Bureau, told the House Transportation subcommittee.
Kostelnik was talking about patrolling U.S. borders and ports from altitudes around 12,000 feet, an automated operation that’s currently underway in Arizona. But that’s only the beginning of the potential of surveillance from the sky.
In a scene that could have been inspired by the movie “Minority Report,” one North Carolina county is using a UAV equipped with low-light and infrared cameras to keep watch on its citizens. The aircraft has been dispatched to monitor gatherings of motorcycle riders at the Gaston County fairgrounds from just a few hundred feet in the air–close enough to identify faces–and many more uses, such as the aerial detection of marijuana fields, are planned.
So every police outfit from Bumblefuck, U.S.A. can now buy itself a shiny new toy from the “homeland security” tax/pork dollars. And because there usually aren’t any terrorists anywhere near them, these knuckledraggers end up figuring out a way to chase the usual crowd of inbred drunks around town with it.
What a country.
To show you why the T2000 is an interesting machine for me to look at, I’m going to have to tell you a few things about large websites. I’m going to tell you what the concerns of the people maintaining such a site are like, and how the T2000 fits in.
To do that, I’m going to show you some graphs and numbers. Now don’t come talk to me afterwards and say things like “I notice KPN is doing X and Y” because I’m going to fudge the numbers. The trends and concerns I’m showing you are real, the numbers are not. The real numbers are confidential, of course.
Here’s what an average day looks like for a large Dutch web site:
As you can see, the busy hours correspond to “working hours and after dinner”. No surprise there. If you’re doing a website with an international audience, you’ll probably get a graph that is 1) much flatter, since your audience is not bound to one timezone, and 2) reflecting international internet usage, so you’ll get a peak during US peak hours and another during European peak hours, for example.
You’ll see the same pattern in your applications. Here’s the thread count for one of them:
Although those graphs mostly match, if you put them on top of each other some differences start to show. That’s because the things people do on your website during daytime hours are different from the things they do during the evening hours. During the daytime, lots of people are using their work computer to browse the net. Any personal business they have with you, such as bills, or settings and configuration changes on a product they buy from you privately, will likely wait for the evening browse session. So the actual work shifts around from one application to another over the course of the day. The graph shows you just one application – and if you notice that this small part alone has quite a number of threads, you’ll understand why I’ve been talking about threads so much.
Remember the new logo KPN introduced a little over a week ago? Well, guess where the peak in this graph comes from:
Although that peak is interesting, look at the trend. Our real trend numbers are different and go back further than that, of course, but this particular component of the web site was introduced last summer. The point I’m trying to make is: some of your workload may double in six months, and some of it may double in six minutes. Either way, you want to be able to deal with it – by limiting a certain kind of traffic, and/or by allocating resources to things you feel are more important than the things that are getting hammered.
To summarize a few points:
– peak traffic and average traffic are different things. You need to accommodate peak traffic, but only up to a point. If you’re able to differentiate your workload, you can allocate resources to the parts that need them more, from a business perspective. You want to prevent views of a new commercial from influencing what people see when they log on to your website to check their bill. The bills are more important.
– you want to be able to move capacity around – if marketing launches a new campaign that will land you a bunch of new customers for a particular product, you want to be able to allocate extra resources to the part of the application that handles registrations for that new product, for example.
– you want to plan for growth. Lots of growth, and sometimes faster than the business expects.
If you’d buy one large computer as your web server, these things are probably going to be very difficult to do, unless you use virtualization to have that large computer pretend it is a lot of small ones.
What else is a factor? What costs money?
First of all, purchasing the computer. If you do buy a large computer from, for example, Sun, you’re getting a machine that is good at a lot of things, some of which you’re not going to need. A webserver typically doesn’t need blazingly fast disk arrays – a webserver needs to read some files from disk, and write log entries, but the files most requested by the web clients will typically live in memory buffers. A well tuned web site is limited only by the amount of CPU power it has – disks and network should not be the bottleneck. If they are: you need to tune your site. So your database server will probably need those disks, but not your webserver. And if you buy that large server, you are paying for that disk-throughput capacity, because the kind of applications that machine is really built for do need it.
Second: floor space in your datacenter. Well, not just floor space, but increasingly important: power in your datacenter. Most datacenters have a limit on the amount of power they can deliver to one rack, based on electricity and the amount of air conditioning they have. In older datacenters you’re not going to be able to fill a rack with power hungry Pentium 4 machines without running into those limits (sometimes even before you fill half a rack). The computer industry knows this, of course, and that’s why “Performance per Watt” is such a big thing. Sun calls it SWaP, the Space, Watts and Performance metric.
Third: personnel. In your typical datacenter (ours does much, much more for us) you will typically employ one technical guy and a dog (the technical guy will remove broken hardware from the racks and replace it with new hardware, and the dog is there to bite him if he tries anything else), but somebody has to maintain the servers: apply patches, stop and start applications, etc. If you have a large amount of little servers, you have to do something about maintenance, or you’d get a lot of administrators doing nothing else but keeping up to date with patches and stuff. Sun has a lot of software to help out with this, HP has software to maintain their blade servers. It’s an area where a lot of development is being done, specifically to address the points I mentioned early in this posting: shift workloads around, allocate resources dynamically, react swiftly to changing circumstances. I would love to see the tools that Google developed for their server park.
If you combine everything I’ve shown you and said so far, it’s obviously easier to be flexible and dynamic if you have a large set of small resources and the right management tools for them, so you either buy a large machine and do virtualization into lots of small virtual servers, or you buy small machines, and you manage them in groups/clusters or whatever you want to call it.
And once you know how well a certain machine does the kind of work you want it to do, it becomes a simple spreadsheet calculation to find which set of machines gives you the most bang for the buck, where that buck includes purchase, power/space, and maintenance personnel. And that allows you to compare wildly different beasts as well – for example you can compare a few Sun Fire V1280s, a larger set of Sun Fire V440s, a big set of Sun Fire T2000 CoolThreads or a really big set of HP Blades, even though they are wildly different products.
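A minimal sketch of that spreadsheet calculation – every number here is a made-up placeholder, not a real price, power figure, or benchmark result:

```python
def cost_per_perf(purchase_eur, watts, admin_hours_per_year, perf_rps,
                  years=3, eur_per_kwh=0.15, eur_per_admin_hour=60.0):
    """Total cost of ownership over a planning horizon, divided by
    throughput. Lower is better: euros per request-per-second of capacity."""
    energy_eur = watts / 1000 * 24 * 365 * years * eur_per_kwh
    admin_eur = admin_hours_per_year * years * eur_per_admin_hour
    return (purchase_eur + energy_eur + admin_eur) / perf_rps

# Hypothetical candidates: purchase price, watts, admin hours/year, requests/s
candidates = {
    "T2000":        cost_per_perf(12000, 300, 10, 2000),
    "Pentium 4 1U": cost_per_perf(2000, 350, 10, 400),
}
winner = min(candidates, key=candidates.get)
```

With these particular placeholder numbers the T2000 comes out ahead despite the higher purchase price, which is exactly the kind of trade-off the spreadsheet is meant to expose.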
Our Leader’s been complaining about the lack of good news being reported out of Iraq. He has a good point. As I surf around the Francosphere, I see Juan Cole writing about 69 Iraqis killed yesterday, Lafayette reporting that the Iraqis are upset about a church massacre, and Allbritton writing about death squads.
The direct repair of tractors is currently underway; the program will return to service at least 5,000 tractors. To date, 1,437 tractors have been repaired in workshops located around Iraq.
Repairing 1,437 tractors might not seem like as big a deal as death squads and mosque massacres, but you have to put it in its proper perspective. The old Soviet version of Pravda was big on writing good news stories about tractor production. They would have killed for a story about 1,437 repaired tractors. It looks like 1000 was the best they could do during wartime:
The personnel of the plant [M. I. Kalinin Tractor Plant, in Rubtovsk] mastered the production of the ATZ-NATI (Altair Tractor Plant Institute of Motor and Tractor Scientific Research) caterpillar tractor in record time and produced the first thousand tractors as early as December 1943.
Sure, the M. I. Kalinin Tractor Plant was building them from scratch and we’re only repairing them, but they were fighting something called the Great Patriotic War while we’re engaged in the ultimate struggle for world freedom. We don’t have time to build tractors, but I bet our repaired ones can outplow the crap out of the commie ones.
Devin Haskin isn’t the first little boy to find the inside of a toy machine too enticing to resist.
When the 3-year-old Austin, Minn., boy crawled through the discharge chute of a Toy Chest claw machine at a Godfather’s Pizza in his hometown, he ended up on the other side of the glass surrounded by stuffed animals.
Rescuers had to pry the door open to get Devin out, though the boy was in no hurry to leave.
“When we got it open, he didn’t want to come out,” Fire Chief Dan Wilson said Tuesday. “One of my firefighters had to reach inside and get him. He was happy in there.”
Do you have an iPod?
No, I do not. Nor do my children. My children — in many dimensions they’re as poorly behaved as many other children, but at least on this dimension I’ve got my kids brainwashed: You don’t use Google, and you don’t use an iPod.
— Steve Ballmer, Microsoft
Steve, if the way to get your own kids to use your products is to brainwash them instead of simply having a better product, you’re in trouble.
In a stunning move, BAPCo, the industry-standard Windows benchmarking consortium, announced that Apple Computer has joined up as a member. BAPCo is responsible for the SYSmark 2004SE and MobileMark benchmark suites we use at PC Magazine Labs for testing PCs. BAPCo also produces the webserver test WEBmark.
BAPCo members include AMD, Intel, Transmeta, ATI, nVidia, Microsoft, Ziff Davis Media, CNET, Dell, HP, Toshiba, Seagate, VNU, Atheros, and ARCintuition. These heavy hitters cooperate on determining and developing testing methodologies, using industry standard programs like Microsoft Office, Adobe Creative Suite, and 3ds max. The SYSmark and MobileMark benchmarks are used as performance tests by media outlets, corporations, and government agencies worldwide.
This is significant because it means that Apple has now committed to Windows-based performance testing, and it will influence industry-standard testing methodologies going forward, possibly including Mac OS X testing. We speculate that Apple will now develop Windows drivers for Intel Macs like the iMac, Mac mini and MacBook Pro with Intel Core Duo processors. You will probably still need to buy your own copy of Windows XP (or Vista), but this is exciting stuff.
We’ve seen rumors that Apple will include virtualization technology in Mac OS X 10.5 (Leopard), but benchmarks like SYSmark and MobileMark don’t work well in virtualized environments since they use utilities that call low-level processes (like anti-virus). This bodes well for native Windows support on Macs in the future.
When the Supreme Court ruled in the Grokster case, they laid down a very specific case for when a service provider might be liable for the actions of its users. That was only if the service provider took “affirmative steps” to induce copyright violations. This seemed odd and likely to cause trouble pretty quickly. It basically suggested that a new company that came along and did exactly what Grokster had done, but avoided proactively encouraging people to download unauthorized material, would be perfectly fine. However, the entertainment industry immediately tried to expand what the decision meant and eventually just pretended the Supreme Court said that file sharing and things like torrent tracking sites were illegal — when it actually said nothing of the sort. The MPAA recently went after a bunch of BitTorrent search engines — which seemed to stretch the Supreme Court ruling again. After all, these are just search engines, and there are tons of legitimate uses for them. At least one is now fighting back. TorrentSpy has filed a motion to dismiss the case, noting that they don’t promote any kind of infringement and they don’t host or link directly to any files copyrighted by the MPAA. In other words, they’re making a case that all they are is a search engine for torrents, and if the industry is worried about people putting up torrents that infringe on copyrights, it should go after those actually responsible, rather than the search engines.
Even if your salary is $1 a year, the taxman still finds a way to get you.
Apple Computer chief executive Steven Jobs has had to give up nearly half of 10 million restricted share options in the iPod and Mac maker that vested this month to satisfy the IRS.
Apple withheld 4.3 million of the shares and sold them at $64.66 a share to raise $295.7 million owed to the IRS, according to filing with the Securities and Exchange Commission.
That still leaves Jobs with 5.4 million shares, worth $323 million at Monday’s closing price.
Since the arrival of the mega-canvas opposite Leiden Centraal station, there had been plenty of speculation about the strength of the screen around the construction pit. The sceptics have been proven right. The 180,000-euro canvas tore apart this afternoon. Or, as CDA party leader De Haan put it: “The modesty screen didn’t survive the first spring breeze. I hope Hillebrand kept the receipt.”
Promoted from the comments section:
Out of curiosity, what is the max % of CPU usage reported for a thread that’s hammering away at a compile? (In the screenshot the max is 2.8% but it’s sorted and I imagine some scrolled off the screen.)
For single-thread processes I’ve seen it hover around 2.8 to 3%. For multi-thread processes I’ve seen larger numbers, of course. I’m not sure that exact number is very valuable in the larger scheme of things. Yesterday I hammered Apache on it, serving a copy of my weblog (if you go to the machine’s URL, you’ll see this weblog with an older copy of the postings database), and experimented a bit with the number of threads requesting pages and the number of preforked Apache processes. Sun recommends using the prefork model, but I’m also going to try the worker MPM we’re using on the other Suns, since I didn’t really like the behaviour Apache was showing – it rapidly grew to using 4.5 GB of memory, which might be a problem for us, since we’ll also be running a few Java virtual machines on it with between 2 and 3 GB of memory allocated to each of them. In the prefork configuration it served between 100 and 400 weblog pages per second without breaking a sweat, depending on how I put a load on it.
Anyway, coming back to your question: the /server-status page in Apache said Apache was using between 400 and 950% CPU, which is a different number than you’d get by adding up all the 2.8% and 3% processes and multiplying by 32. In the screenshot you saw mostly 2.8%, not the “theoretical” 3.1% you’d get by dividing 100% by 32 CPUs, and I think that’s because the cc processes were fairly short-lived, each compiling just one file. If you run a single-thread task and let it live longer, you’ll see prstat report 3 to 3.1%.
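The arithmetic behind those percentages, for the curious – the 32 comes from the T2000’s 8 cores with 4 hardware threads each, which Solaris presents as 32 CPUs (and the /server-status interpretation is my reading of it, not gospel):

```python
hw_threads = 32  # 8 cores x 4 hardware threads per core on the T2000

# prstat reports a process's share of the whole machine, so one fully
# busy single-threaded process tops out around 100% / 32 = 3.125%:
single_thread_pct = 100 / hw_threads

# Apache's /server-status, by contrast, sums CPU across all its
# processes relative to a single CPU, so 950% reads as roughly 9.5
# hardware threads' worth of work out of the 32 available:
cpus_busy = 950 / 100
```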
A sports utility vehicle is stuck in a sinkhole in the Brooklyn section of New York after a water main break caused the street to give way Monday, March 27, 2006. The driver of the vehicle was not seriously injured, according to the fire department. (AP Photo/Shiho Fukada)
The federal government is making public a huge trove of documents seized during the invasion of Iraq, posting them on the Internet in a step that is at once a nod to the Web’s power and an admission that U.S. intelligence resources are overloaded.
Republican leaders in Congress pushed for the release, which was first proposed by conservative commentators and bloggers hoping to find evidence about the fate of Iraq’s nuclear, chemical and biological weapons programs, or possible links to terror groups.
Web surfers have begun posting translations and comments, digging through the documents with gusto. The idea of the government’s turning over a massive database to volunteers is revolutionary — and not only to them.
With a Mac in an office that is based on Microsoft products, you sometimes have to, ehm, compromise. Mail, for example: if the administrators decide to turn off IMAP and POP3, it becomes something you learn to read through a web interface.
Since yesterday, I’ve got the following warning in my webmail:
If you see this, you’d expect to be able to change your password, right?
Wrong. Not possible through the webmail interface. In 9 days I’m going to find out if Outlook webmail includes a way to change expired passwords.