Friday, March 24, 2006

Oracle 10g for Solaris x64

Oracle 10g R2, the 64-bit version for Solaris on x64, has finally been released.

Another non-Sun Niagara test

It's good to see another Niagara test done by people outside Sun - customers. We have found Niagara servers to be really good for many workloads, not just web serving, whether compared to traditional SPARC servers or to x86/x64 servers. Here you can find a benchmark of the T2000.

So, after a week with the Niagara T2000, I’ve managed to find some time to do some more detailed benchmarks, and the results are very impressive. The T2000 is definitely an impressive piece of equipment, it seems very, very capable, and we may very well end up going with the platform for our mirror server. Bottom line, the T2000 was able to handle over 3 times the number of transactions per-second and about 60% more concurrent downloads than the current machine can (a dual Itanium with 32Gb of memory) running identical software. Its advantages were even bigger than that again, when compared to a well-specced x86 machine. Not bad!

Friday, March 17, 2006

My patch integrated in Open Solaris

Finally my patch has been integrated into Open Solaris build 37. I must say that the procedure to integrate a patch into Open Solaris is really easy - just send a request for a sponsor (someone to help you) to the request-sponsor list, then someone will offer to be your sponsor, and that's it. Of course it is good manners to first discuss the problem on the related Open Solaris list if it involves new functionality, etc. If it's just a simple bug you can skip this part.

What is the patch I wrote about? It adds functionality to ZFS so you can list and import previously destroyed pools. It was really simple to do, but I think it will be useful. The actual RFE is 6276934. This should be available in Nevada build 37, and it looks like it will make it into Solaris 10 update 2.
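For the curious, the new behavior surfaces through the -D flag of zpool import. A quick sketch (the pool name tank is just an example):

```shell
# Destroy a pool, then discover it again.
# The pool name "tank" is only an example.
zpool destroy tank

# Without -D destroyed pools are hidden; with it they are listed:
zpool import -D

# Import the destroyed pool back:
zpool import -D tank
```

Note that recovery is only possible as long as the devices haven't been reused for something else in the meantime.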

Thursday, March 16, 2006

FMA for Opteron

Some time ago I wrote that FMA enhancements for AMD CPUs were integrated into Open Solaris. Thanks to Gavin Maltby, here are some details. Really worth reading.

Wednesday, March 15, 2006

The Rock and new servers

The Register has some rumors on the new Rock processor:

The Rock processor - due out in 2008 - will have four cores or 16 cores, depending on how you slice the product. By that, we mean that Sun has divided the Rock CPU into four, separate cores each with four processing engines. Each core also has four FGUs (floating point/graphics units). Each processing engine will be able to crank two threads giving you - 4 x 4 x 2 - 32 threads per chip.

Sun appears to have a couple flavors of Rock – Pebble and Boulder. Our information on Pebble is pretty thin, although it appears to be the flavor of Rock meant to sit in one-socket servers. Boulder then powers two-socket, four-socket and eight-socket servers. The servers have been code-named "Supernova" and appear impressive indeed. A two-socket box – with 32 cores – will support up to 128 FB-DIMMs. The eight-socket boxes will support a whopping 512 FB-DIMMs. Sun appears to have some fancy shared memory tricks up its sleeve with this kit.

Monday, March 13, 2006

Ubuntu on Niagara

Well, that was fast. Looks like you can actually boot Linux/Ubuntu on Niagara!
Thanks to extraordinary efforts from David Miller, the Ubuntu SPARC team and the
entire Linux-on-SPARC community, it should now be possible to test out the
complete Ubuntu installer and environment on Niagara machines. As of today, the
unofficial community port of Ubuntu to SPARC should be installable on Niagara,
and we would love to hear reports of success or failure (and love them more if
they come with patches for performance or features :-)).

Thursday, March 02, 2006

2x E6500 on one T2000

In my previous blog entry I wrote that one T2000 (8 cores, 1GHz) delivers approximately 5-7 times the performance of a single E6500 (12x US-II 400MHz) in our production environment. To get an even better picture of how it scales with our applications, we created two Zones on the same T2000 - this time we put the applications from one E6500 into one zone and the applications from another E6500 (same config) into a second zone. Then we put these two zones into real production in place of the two E6500s.
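For illustration, setting up a pair of zones like that boils down to something along these lines (the zone names and paths here are made up, not our actual config):

```shell
# Hypothetical sketch: one zone per consolidated E6500 workload.
# Zone names and zonepaths are examples only.
zonecfg -z e6500-a 'create; set zonepath=/zones/e6500-a; set autoboot=true'
zoneadm -z e6500-a install
zoneadm -z e6500-a boot

# Repeat for the second E6500's applications:
zonecfg -z e6500-b 'create; set zonepath=/zones/e6500-b; set autoboot=true'
zoneadm -z e6500-b install
zoneadm -z e6500-b boot
```

With both workloads in separate zones on one box, each keeps its own configuration and namespace while sharing the 32 hardware threads.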

During peak hours these E6500s are overloaded (0% idle CPU most of the time, a dozen threads queued for running, some network packet drops, etc. - you get the idea). The T2000, with exactly the same production workload, peaks at about 20% load, with no network packet drops and no threads queued. So there's still a lot of headroom.

In order to see how capable the T2000 is at I/O, I increased some parameters in our applications so that data processing was more aggressive - more NFS traffic and more CPU processing - all in production with real data and a real workload. The T2000 was reading almost 500Mb/s from NFS servers, writing another 200Mb/s to NFS servers, and communicating with the frontend servers at about 260Mb/s. And still no network packet drops and no threads queued up; the server peaked at about 30% CPU load. So there's still large headroom. All of this traffic went over the internal on-board interfaces. When you add the numbers up, you get almost 1Gb/s of real production traffic.
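For reference, these are roughly the stock Solaris tools I use to watch this kind of load (the intervals are arbitrary, nothing here is T2000-specific):

```shell
# Observing CPU, run queue, network and NFS behavior under load:
vmstat 5        # 'r' column shows threads queued waiting for a CPU
mpstat 5        # per-CPU utilization across all 32 hardware strands
netstat -i 5    # per-interface packet counts and input/output errors
nfsstat -c      # client-side NFS call counts, retransmits, timeouts
```

Watching the vmstat run queue and netstat error columns together is what tells you whether the box is actually saturated or just busy.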

Unfortunately our T2000 has only 16GB of memory, which was a little problematic, so I couldn't push it even further. I wish I had a T2000 with 32GB of RAM and a 1.2GHz UltraSPARC T1 - I could try to consolidate even more gear and try more data processing.

P.S. We're definitely buying more T2000s and putting them in place of our E6500s, E4500s, ...

The applications weren't recompiled for the UltraSPARC T1 - we use the same binaries as on the E6500s, and the applications were configured exactly the same. The NFS traffic goes to a really large number of small files, with hundreds of threads working concurrently and a lot of metadata manipulation (renaming files, removing them, creating new ones, etc.) - so it's not simple sequential reading of big files. The on-board GbE NICs were used on the T2000. No special tuning was done specifically for the T2000 - the same tunables as on the E6500s (larger TCP buffers, backlog queues, more NFS client threads per filesystem, etc.). Solaris 10 was used.
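To give a flavor of the kind of tuning mentioned above, here is a sketch using standard Solaris 10 tunables - the values are examples for illustration, not the ones we actually run with:

```shell
# Example TCP buffer and listen-backlog tuning via ndd
# (values are illustrative, not our production settings):
ndd -set /dev/tcp tcp_max_buf 4194304
ndd -set /dev/tcp tcp_xmit_hiwat 1048576
ndd -set /dev/tcp tcp_recv_hiwat 1048576
ndd -set /dev/tcp tcp_conn_req_max_q 4096

# More NFSv3 client threads per filesystem - goes in /etc/system,
# takes effect after a reboot:
#   set nfs:nfs3_max_threads = 32
```

ndd settings made this way don't survive a reboot, so they're usually also put into a boot-time script.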