gio10469

Intel vs AMD, etc...


So recently I decided to build my own desktop PC, as I'm starting to realize that my Dell XPS m1730 is slowly showing that its future may be uncertain. I'm looking to build a machine in the range of $1500 to $2500, though I'm not too concerned with this because I plan on building it piece by piece over a period of time, gradually making upgrades.

Well, to get to the point, I've noticed while researching CPUs that there's a lot going on in the market right now. There are many decisions to make, such as AMD or Intel? If I go the Intel route, I'm not sure which socket I should invest in. There's the 1366, which is very popular but soon to be discontinued, and then there's the 1155, which I understand is one of the new platforms for Intel's upcoming chips. But I have also read that Intel plans to release socket 2011 sometime this year, so should I wait around for that? Basically, I want to know which socket is the best bet, so that if I ever want to upgrade my CPU in the future I can do so without changing my motherboard.

So can any computer buffs point me in the right direction? It's been so long since I've built a system and I'm a bit rusty on what to look for these days since technology changes so rapidly.

PS: Please don't get on my case if any of my info is wrong, just politely correct me... it's been a while and I'm not up to date. Also, I know nothing about AMD, so please provide any suggestions heading in that direction as well.


Both of my last two machines have been AMD.  An interesting story I heard somewhere, probably at my supplier, is that Intel has licensed AMD's technology.

In the past, Intel used RISC processors with the X86 decor overlaid on them in firmware, and AMD built CISC machines.  I think that is likely still true.  Corporations don't change direction very quickly, and licensing the AMD stuff probably gave Intel "hardware" segmentation.

I don't build machines, but I hunt for a good commercial build. My current machine, now three months old, is a Gigabyte GA-MA78LM-S2H with an Athlon II X2 processor (I couldn't afford a Phenom system), so I am definitely in the AMD camp. AMD has a relationship with ATI (ownership, I think) and this mobo is ATI/AMD based. It has an AMD/ATI 760G [Radeon 3000] graphics controller which seems to have everything, offering both VGA and HD output. This mobo has the correct socket for AM3 chips when they become available. The newer chips have six cores, so if you are planning to do 3D CGI development, you might want one of those.

This baby cost me $500 Cdn without any operating software; it is $699 with Windows 7 OEM included. Since I use Linux, I didn't need the bloatware. Of course this is the naked box, but it did include all the usual stuff, including 12 USB ports, a 500 GB 7200 RPM SATA hard disk, and a very nice Blu-ray CD/DVD-RW drive. There are two extra bays for rotating storage and several expansion slots. Maximum memory on this guy seems to be 16 GB (it has two memory slots capable of taking 8 GB DDR2 sticks). I have two 2 GB sticks, which according to the manual gives dual-channel fetch, so my memory path is 128 bits wide; the rest of the system is 64-bit. It runs like a scalded cat compared to my old single-core Athlon XP 2400.

For the money, I think you might be better off going for something like this and getting some HD peripherals. This machine can drive HD video and HD sound systems. If you go for the Phenom processors or the AM3 platform, you can run your processors in ganged mode, which I understand to be a really nice sort of deep pipeline for instructions and so on.

This machine has a fancier BIOS than I have ever seen, and it seems very, very capable. I really haven't had the temerity to explore it much, but if you want you can set up overclocking from the BIOS setup and do other neat stuff. I am on default settings and it is good enough for me. SC4 loads under Wine in six seconds.


I sit in the AMD camp as well and I am also looking at building a system in the near future for a similar budget.

There are two main draws towards AMD for me. One is the price difference: the top AMD chip is around $400 at the moment while Intel's top chip is $700, and they have comparable performance, each edging out the other on a couple of different points. The second is that I have found AMD to be more durable than Intel in the long term and when my CPU is running warm.

The motherboard I was most recently looking at for the AMD series has something newfangled on it called CrossFire or something similar, where you are able to plug in two ATI graphics cards to boost performance. It seemed neat and might fit in well with your upgrade-to-awesomeness-over-time plan.


Despite having friends who work for Intel, I am an AMD diehard, and have been since having two Pentium IIIs inexplicably fail on me over the course of two years, back in the early 00s. 

With Intel, you're paying for the name, largely.  While they've probably gotten better after the dreck that was the Pentium III, I've generally found AMD to produce a less expensive, more powerful and more reliable product.  You really can't go wrong with that.

On a somewhat related tangent, I would avoid nVidia like the plague. My previous laptop, before my current Toshiba, was an HP with an nForce motherboard in it, part of a batch of motherboards which, as I found out later, nVidia had sold to HP knowing they were defective!

The motherboard started suffering from severe overheating, and the other components in there got cooked one by one after about a year... the boot sector on the hard drive got fried after 14 months (the first hard drive problem I've ever really had). Then the USB ports started failing, and the sound card decided to go out right while I was in the midst of trying to finish up my composition portfolio for my Ph.D. I had to replace the whole computer after only having it 17 months.

-Alex (Tarkus)

  • Original Poster
    Hmmm, these are all very good suggestions, but I'm still torn between Intel and AMD. AMD does offer very good budget CPUs, but I was reading in multiple places that Intel chips hold an advantage over AMD chips in that they have more advanced Hyper-Threading, which allows 4 cores to function as 8 threads. I was looking into a lot of the new Sandy Bridge hype, and I saw an ASUS motherboard built for Intel's new line that could run triple and even quadruple SLI with multiple video cards. I also read that AMD should be releasing its new line of chips (codenamed Bulldozer, I believe) by the end of this month, so I think it might be best to patiently wait for those to be released and see if the AMD counterattack can win me over. Thank you all for your input!

    @Tarkus: I totally agree with you about nVidia! I'm currently using a Dell XPS gaming laptop with one of their video cards, and it definitely does not live up to its hype. My laptop was listed under that court settlement over the faulty cards, but they only offered to replace cards that had already failed. My laptop is slowly dying piece by piece, which is why I decided to start building this desktop, since I still require a powerful computer and this one just isn't cutting it anymore. I don't think high-performance laptops are a worthy investment at all... at least not yet. I don't believe laptop companies have successfully thought of an adequate way to keep these laptops well cooled. Unfortunately I learned this the hard way.


    Live and learn.  Laptops are the bane of the computer world.  They are restricted in expansion, and have none of the good things of a modern notepad.  I think their days are numbered.

    Don't be hyped by the Intel bumpf on hyperthreading.  You can't hyperthread anything if the program isn't written to spawn (fork) separate processes that will use it.  Think about the software you are going to run, not the features of the hardware.  The hardware is a vehicle that supports your applications, and should not be an end, but only a means.
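
    To illustrate that point: a program only benefits from the extra hardware threads if it actually creates work for them itself. Here's a rough sketch in C with POSIX threads (illustrative only; the thread count and function names are made up for the example):

```c
/* Illustrative only: a program sees extra hardware threads (including
   Hyper-Threaded ones) only if it creates parallel work for them itself. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4          /* assumption: pretend we have 4 logical CPUs */
#define N 1000000

static double data[N];
static double partial[NTHREADS];

static void *worker(void *arg) {
    long id = (long)arg;
    long chunk = N / NTHREADS;
    double sum = 0.0;
    for (long i = id * chunk; i < (id + 1) * chunk; i++)
        sum += data[i];
    partial[id] = sum;      /* each thread writes its own slot, so no locking needed */
    return NULL;
}

int main(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < N; i++) data[i] = 1.0;

    /* spawn the workers; this is the part the application has to do itself */
    for (long t = 0; t < NTHREADS; t++)
        pthread_create(&tid[t], NULL, worker, (void *)t);

    double total = 0.0;
    for (long t = 0; t < NTHREADS; t++) {
        pthread_join(tid[t], NULL);
        total += partial[t];
    }
    printf("sum = %f\n", total);
    return 0;
}
```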

    AMD generally lags Intel on fancy toys, but they generally get it right the first time when they bring it to market. They are now in bed with ATI, and I have one of their new fusion chips which, unfortunately, takes a chunk out of my RAM, but I have 4 GB, so what the hey.

    The only intense game I run is SC4, and within its limitations it runs well. My late wife's Pentium couldn't run SC4 properly at all under Linux because the graphics gear was so old that Linux no longer supported it. You do have to watch that with Linux, as they don't support stuff forever.

    My biggest argument with Intel is that they are overpriced for what you get.  You don't want to pay for the logo, you want performance.


    Not all laptops are garbage, just most of them. I am currently running a Panasonic Toughbook, which is typically only sold for out-of-office work environments. The thing weighs a ton with all the armour and damage protection, but I am going into my fifth year without doing any kind of system reinstall. The only problem I had with it was in Quebec this summer, where the high-humidity heat waves made my DVD drive temporarily useless. This one definitely suffers from a lack of upgradeability, but the durability has let me take it all over the world. For the record, this is the only Intel chip I've had success with, but that's because it's in a nigh-indestructible tank of a laptop weighing in at 3.3 kilograms.


    The Intel vs. AMD debate isn't very complicated if you're willing to steer clear of all the fanboyism and "I had bad experiences with XYZ and they suck!" comments. The history of performance of Intel vs AMD goes something like this: Intel and AMD are essentially tied for performance; then Intel starts releasing chips that boast faster performance than AMD can offer; then Intel ups the ante by offering new chips faster than AMD's while pushing the performance gap wider; the gap continues to grow to the point that it becomes an embarrassment to AMD; then a radical shift in processor design occurs, AMD catches back up to Intel, and the two are once again locked in a battle for processor speed supremacy. This cycle repeats itself about every 5-6 years. To put it another way, Intel and AMD offer about the same processor performance when they are working with new technology; however, over the next several years, Intel will push that technology further and faster than AMD.

    However, it's not just about processor speed; it's also about the other features that are loaded onto the chip. Intel's HyperThreading technology was initially a dud; however, it is being dusted off again in a new and much improved version. The new Sandy Bridge processor family comes loaded with a whole host of new processor instructions that will allow "Sandy Bridge aware" programs to take advantage of the processor in ways that were never before possible. Additionally, Sandy Bridge offers Intel's new "Insider" technology (whether this is really DRM is a discussion for another time), which is currently the only technology that will be supported for new 1080p streaming video broadcasts. AMD, on the other hand, is offering things Intel is not, like native DirectX 11 support in the processor and possibly a new type of "reverse multithreading" technology that is theoretically capable of performance gains greater than Intel's HyperThreading. Additionally, AMD has guaranteed that there will be "Fusion aware" software that takes advantage of the new core technology; Intel is assumed to have similar software commitments, but there is little press evidence of it at the moment.
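
    As a side note on what "Sandy Bridge aware" means in practice: software typically checks at runtime whether the CPU exposes the new instructions before using them. A minimal sketch of my own, assuming GCC's feature-test built-ins and using AVX (the headline instruction set extension in Sandy Bridge) as the example:

```c
/* Illustration only: runtime check for a new instruction set extension
   (AVX, introduced with Sandy Bridge) using a GCC built-in. */
#include <stdio.h>

int main(void) {
    __builtin_cpu_init();                    /* populate the CPU feature info */
    if (__builtin_cpu_supports("avx"))
        printf("AVX available: an \"aware\" program could take the fast path.\n");
    else
        printf("No AVX: the program must fall back to older instructions.\n");
    return 0;
}
```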

    There is also the issue of socket type. AMD typically wins this competition because it uses its sockets for a wider variety of chips than Intel, and for a longer period of time, which means easier upgrade compatibility in the future. Going with Intel requires paying closer attention to the socket types in use and which ones are likely to support future chip releases.

    Additionally, one also needs to consider that AMD's and Intel's chips typically excel in different areas. When AMD and Intel are offering chips that are very similar in overall performance, AMD often takes the crown in floating point math, but Intel always takes the crown in integer math.

    All of the above having been said, my best advice to you is to wait. AMD and Intel are about to butt heads in another major processor supremacy battle, and no one is entirely sure how it is going to play out. Currently, very few people are running more than a quad-core system (and most people aren't even running that). Intel is predicting that octo-core systems will be commonplace by 2012, with 16-core systems either just entering the market or just on the horizon. Also, somewhere in all of this, NVIDIA is looking at making an upset and challenging both AMD and Intel for the CPU market. If at all possible, you want to just sit on the hardware you currently have and wait to see what the tech landscape looks like after the industry giants get done slugging it out.

    Originally posted by: A Nonny Moose

    Live and learn. Laptops are the bane of the computer world. They are restricted in expansion, and have none of the good things of a modern notepad. I think their days are numbered.

    Laptops still have a long life ahead of them. They have been gaining strength for the past decade, and the current drive towards an entirely mobile world is only fueling that strength. If anything, it's desktops that are looking at the graveyard; PC manufacturers are watching their desktop sales continue to drop, and Dell is already planning for a post-desktop world.

    Don't be hyped by the Intel bumpf on hyperthreading. You can't hyperthread anything if the program isn't written to spawn (fork) separate processes that will use it.

    HyperThreading technology wasn't developed so a single end-user program could spawn two different processes and then push both of them through the same processor at the same time; it was developed so two different end-user programs could both get access to the processor at the same time.  Programs that are designed to split work up between processors are supposed to actually use two physically separate processors, not a single processor with some fancy hyperthreading capabilities.
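
    Either way, from the operating system's point of view a Hyper-Threaded core simply shows up as extra logical CPUs that the scheduler can hand programs to. A quick sketch of my own, assuming Linux/glibc's sysconf, showing what the scheduler has to work with:

```c
/* Illustration: on a Hyper-Threaded machine the OS simply reports more
   logical CPUs (typically 2x the physical cores) and schedules onto them. */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    long logical = sysconf(_SC_NPROCESSORS_ONLN);   /* logical CPUs online */
    printf("Logical CPUs visible to the scheduler: %ld\n", logical);
    printf("With Hyper-Threading enabled, this is usually twice the core count.\n");
    return 0;
}
```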


    Well, now. I think that pretty much caps the debate. Hym is quite correct. If you go for a new-design chip, how long do you have to wait for software? Intel's new design will probably drive a new version of Windows onto the market, probably too soon, and we'll have another Vista on our hands.

    You'll notice that Hym quite correctly states that any software will have to be "aware" of the processor features, which means that they can be written exclusively to run there or be carefully written to run with or without such features.  The same applies universally to programming.  Good software people take this all in stride, and make their stuff able to run pretty much anywhere.  This means avoiding specialty API calls on a specific operating system, and adherence to standards, of which there are many.
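
    One common way to "run with or without such features" is to pick the implementation at startup rather than at compile time. A rough sketch of the idea (my own illustration, again assuming GCC's feature-test built-ins; the function names are invented, and a real fast path would be compiled with the appropriate instruction set flags):

```c
/* Illustration of runtime dispatch: choose a code path based on what the
   CPU actually supports, so a single binary runs everywhere.
   Function names are invented for the example. */
#include <stdio.h>

static double sum_plain(const double *v, int n) {       /* portable fallback */
    double s = 0.0;
    for (int i = 0; i < n; i++) s += v[i];
    return s;
}

/* Stand-in for a version built with vector intrinsics and -mavx. */
static double sum_fancy(const double *v, int n) {
    return sum_plain(v, n);                              /* same result; faster in real life */
}

static double (*sum_impl)(const double *, int) = sum_plain;

int main(void) {
    __builtin_cpu_init();
    if (__builtin_cpu_supports("avx"))                   /* GCC built-in feature test */
        sum_impl = sum_fancy;

    double v[4] = {1.0, 2.0, 3.0, 4.0};
    printf("sum = %f\n", sum_impl(v, 4));
    return 0;
}
```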

    One of the features that AMD is working on, and apparently has working on the new six-core chips, is the idea of ganging processors, so that multiple processors behave like a CDC 6600 (of fine vintage). I have little evidence for this, but it is being talked about. My BIOS apparently has the capability of handling this, since it always says I am in non-ganged mode. My vendor says it is applicable only to quad cores and above. My system, by the way, has a socket that accommodates an AM3 chip if I ever decide to upgrade. Good thing that DDR2 sticks come in up to 8 GB slices.

    So, my friend, you are probably still in a quandary.  On the socket issue alone, I would choose AMD, because Intel changes their socket faster than most girls change their mind.  This is good for mobo makers, but not for users. 

    Unless you are an operating system programmer, I don't think it makes much difference to you which processors you get.

  • Original Poster
    Yeah, I think I'll just sit and watch the CPU slugfest begin! This year sounds like it will be very interesting in the CPU market. I really hope that AMD is able to produce a strong product with its Bulldozer line. Waiting around also gives me time to save up enough money to get some sweet parts when the time is right. Now, I understand the desktop is dying, but workstation and gaming computers will always have their place in the world, right? I mean, unless technology radically changes, workstation/gaming PCs will always hold a performance advantage. Of course, this only applies to the people who actually need the performance, since most individuals who own computers only perform basic tasks.

    @Hym: What exactly is the advantage of being able to process floating point numbers versus integer math, as you stated above? What are the advantages of each? Basically, I'm asking why it matters to have one or the other in terms of performance or your CPU's ability.


    I am going to jump the gun on Hym here; he can add his own opinion/facts.

    Floating point operations are common in graphics.  Integer operations are more common in accounting.

    Splines and hidden-line processing are almost all floating point, if not entirely. The reason AMD excels in this area is that they actually have floating point hardware, as opposed to Intel who, in the past, simulated this through their firmware decor. Now that Intel has licensed AMD's technology, who knows what direction they will take. Both decors have inherent integer instructions.
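
    To give a concrete sense of why that kind of work is floating-point heavy, here is a small sketch (my own illustration, using the standard Catmull-Rom spline formula) in which essentially every operation is a floating point multiply or add:

```c
/* Illustration: evaluating one point on a Catmull-Rom spline segment.
   Virtually every operation here is a floating point multiply or add,
   which is why graphics and geometry code leans so hard on FP hardware. */
#include <stdio.h>

static double catmull_rom(double p0, double p1, double p2, double p3, double t) {
    double t2 = t * t;
    double t3 = t2 * t;
    return 0.5 * ((2.0 * p1) +
                  (-p0 + p2) * t +
                  (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t2 +
                  (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t3);
}

int main(void) {
    /* four control points; interpolate halfway between p1 and p2 */
    printf("spline(0.5) = %f\n", catmull_rom(0.0, 1.0, 2.0, 4.0, 0.5));
    return 0;
}
```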

    You are wise to sit in the weeds and watch.

    Just as a matter of interest, why do you want to build a supercomputer?  Are you planning to plot correction free courses to Mars?

  • Original Poster
    Haha! A lot of people have been asking why I am looking to build such a fast machine. Well, part of it is that I have a tendency to want the best, and I don't mind paying for quality when I'm investing my time and money in something. It will also provide me with a bit of a hobby. Also, I'm currently a second-year civil engineering student, and somewhere down the line I'm eventually going to need a workstation computer, but I also want something I can have fun with for gaming and movies without worrying about lag or memory issues. I want to eventually learn how to use 3ds Max, but right now my laptop is having trouble just running Google SketchUp, which is not even as demanding as 3ds Max. I also want to invest in the newer technologies, since that's what's going to be around for the next 3-5 years as AMD and Intel change their socket types.

    So in short, I have a problem with wanting the best if I know I can obtain it with hard work, and I have a thing for performance, but also I'm making an investment in my future for when I'll actually require the power.


    Originally posted by: gio10469

    @Hym: What exactly is the advantage of being able to process floating point numbers versus integer math, as you stated above? What are the advantages of each? Basically, I'm asking why it matters to have one or the other in terms of performance or your CPU's ability.

    Graphics and multimedia, digital design and simulation, and other things like that are usually floating point based, and your processor's ability to handle these kinds of applications is directly correlated to its ability to handle floating point math. However, most other stuff is integer-based math, and having a strong integer math engine improves performance in those programs. I know that isn't a wonderful answer, but the subject is very complicated and it takes in-depth examples to really explain it. Unfortunately, I don't really have time to work up a post with all of that in it.
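
    For a rough feel of the difference, the sketch below (my own toy example, not a proper benchmark) times a loop of integer adds against a loop of floating point multiplies and adds; which one wins, and by how much, depends on exactly the hardware trade-offs being discussed here:

```c
/* Toy timing comparison of integer vs floating point work. Not a rigorous
   benchmark, just an illustration of the two kinds of workload. */
#include <stdio.h>
#include <time.h>

#define ITER 100000000L

int main(void) {
    clock_t t0, t1;
    volatile long isum = 0;          /* volatile keeps the compiler from deleting the loops */
    volatile double fsum = 0.0;

    t0 = clock();
    for (long i = 0; i < ITER; i++)
        isum += i;                   /* integer pipeline */
    t1 = clock();
    printf("integer loop: %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (long i = 0; i < ITER; i++)
        fsum += (double)i * 1.0000001;   /* floating point pipeline */
    t1 = clock();
    printf("float loop:   %.2f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);

    return 0;
}
```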

    Originally posted by: A Nonny Moose

    Now that Intel has licensed AMD's technology, who knows what direction they will take.

    Your supplier lives in the world of 5 years ago, when the AMD64 chips were all the rage and Intel's lack of a strong counter-product had people whispering about Intel possibly licensing AMD's technology.  However, that ship has long sailed and been lost at sea.  While all the tech reviewers were salivating over the 64-bit CPU battles, Intel was already planning the Sandy Bridge architecture, and it sure as hell wasn't going to be based on any AMD technology.


    Tell us more about Sandy Bridge, as I am too lazy to hunt around the net for this.

    I am currently expecting a generation of 128-bit architecture, which will almost be up to the mainframe capabilities of the 1990s. The last mainframe I used had a 320-bit memory fetch: four double words of 80 bits each. The word structure was 72 bits wide, with 8 parity and EDAC bits tagged along. This split into two 36-bit words, each having four nine-bit "bytes" uniquely addressable through the addressing scheme down to the bit level. A little old and heavy-handed, but it could move an entire megabyte over by one bit. Long instructions like that were interruptible.
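
    Just to make that word layout concrete, here is a small sketch of my own that packs a 36-bit word into an ordinary 64-bit integer and pulls out its four nine-bit "bytes" with shifts and masks:

```c
/* Illustration of the 36-bit word layout described above: four nine-bit
   "bytes" packed into one word, carried here in a 64-bit integer. */
#include <stdint.h>
#include <stdio.h>

#define NINE_BIT_MASK 0x1FFu   /* the low nine bits */

/* Extract byte i (0 = leftmost) from a 36-bit word. */
static unsigned get_nine_bit_byte(uint64_t word, int i) {
    return (unsigned)((word >> (9 * (3 - i))) & NINE_BIT_MASK);
}

int main(void) {
    /* Pack four sample nine-bit values: 0x1AB, 0x05C, 0x1FF, 0x012 */
    uint64_t word = ((uint64_t)0x1AB << 27) | ((uint64_t)0x05C << 18) |
                    ((uint64_t)0x1FF << 9)  |  (uint64_t)0x012;

    for (int i = 0; i < 4; i++)
        printf("byte %d = 0x%03X\n", i, get_nine_bit_byte(word, i));
    return 0;
}
```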

    The last AIX box from IBM I used, more than ten years ago, had two 128-bit PowerPCs in the guts. So the microcomputer game is running pretty late, probably because of consumer lag. Quantum leaps have to be orchestrated carefully.

  • Original Poster
    Well, Sandy Bridge is aimed at low-end and mainstream consumers looking for powerful integrated graphics within the CPU itself, but if you buy an unlocked CPU you have the ability to overclock to 5 GHz with the i7 2600K (the K indicates that the processor is unlocked), as long as you are using discrete graphics. Sandy Bridge uses a 32nm process and is supposed to deliver higher performance with less electricity; off the top of my head, I believe they run at 95 W compared to 125 W for the last generation. This allows the chip to stay cooler when overclocking as well as when running in normal conditions, and for mainstream laptops it means longer battery life. Sandy Bridge is a replacement for Intel's 1156 socket, and it's just a stepping stone for the later release of the Ivy Bridge processors, which are supposed to be high-end CPUs replacing socket 1366 with socket 2011.

    I believe I covered the basics but if there is anything else you would like to know that I missed, feel free to ask.


    Thanks. Packing all of your functions into one package leaves you with a single point of failure. I am somewhat enamoured of the idea that if I want a specific graphics package I can have it without having to kill half the circuits in my processor.

    We are slowly becoming so single-bus oriented that some processors are going to be in the antiques class before they even go out of use.

    Here is a simple crossbar diagram showing how any number of active units (logic processors, I/O processors) can be crossbarred to any number of memory controllers. The system control must lie in the memory controllers, as they are the gateway to the information to be processed. An active unit simply raises a memory cycle request with a memory address, and whichever memory controller has that address responds. If the memories are also crossbarred, multiple memory controllers will respond to a multiword request.

    [Image: crossbar.jpg (diagram of active units crossbarred to memory controllers)]

    The active units can be anything. Generally, communication subsystems are attached to an I/O processor, but with additional logic they could easily be attached to the memory controller directly. Naturally, DMA is the order of the day for I/O controllers unless the line is very slow.

    Each data path is a separate bus.
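
    To illustrate the "whichever memory controller has that address responds" idea in the diagram above, here is a toy sketch of my own (not how any real machine is wired; the interleaving scheme and sizes are invented) that routes a request to the controller owning the address:

```c
/* Toy model of the crossbar idea: an active unit presents an address, and
   the memory controller that owns that address answers the request. */
#include <stdint.h>
#include <stdio.h>

#define N_CONTROLLERS 4
#define BLOCK_SIZE    64         /* bytes owned per controller per stripe */

/* Decide which memory controller owns a given physical address. */
static int owning_controller(uint64_t addr) {
    return (int)((addr / BLOCK_SIZE) % N_CONTROLLERS);
}

int main(void) {
    uint64_t requests[] = { 0x0000, 0x0040, 0x0080, 0x00C0, 0x0100 };

    for (int i = 0; i < 5; i++) {
        uint64_t a = requests[i];
        printf("request for address 0x%04llX -> memory controller %d\n",
               (unsigned long long)a, owning_controller(a));
    }
    return 0;
}
```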

  • Original Poster
    Well, as transistors get smaller, I guess manufacturers can't help but think of what they can do with all the empty space. Next thing you know, everything will be integrated into the chip! I'm skeptical myself about packing everything into a single chip. It's much easier to simply replace a video card than to buy a new CPU just because your graphics fried, but then again they are aiming this at the mainstream market, and the average person wouldn't even know where the point of failure lies.

    So if I'm understanding this correctly, the diagram shows 4 buses, correct? I'm sorry, I still have a lot of learning to do when it comes to internal architecture and CPU structuring. What do the dots represent?


    Originally posted by: gio10469

    Well, as transistors get smaller, I guess manufacturers can't help but think of what they can do with all the empty space. Next thing you know, everything will be integrated into the chip! I'm skeptical myself about packing everything into a single chip. It's much easier to simply replace a video card than to buy a new CPU just because your graphics fried, but then again they are aiming this at the mainstream market, and the average person wouldn't even know where the point of failure lies.

    So if I'm understanding this correctly, the diagram shows 4 buses, correct? I'm sorry, I still have a lot of learning to do when it comes to internal architecture and CPU structuring. What do the dots represent?

    The dots represent "more of the same." In the last mainframe I used, the memory modules, of which you could configure four, had eight ports each, which could be cross-connected to up to eight active units, all crossbarred together. In total there could be up to 32 bus connections between the memories and the active units. This was a physical limitation of the hardware, not a logical limitation in the operating system. An earlier version of the hardware allowed 16 active units, so each memory port could handle 16 buses, also crossbarred; I think I remember that in that one you could have eight memory controllers, but the memories were small, only 1 MB each of large ferrite core memory (late 1950s). If you want to see the details, go to the MIT library and look up Project MAC and Multics. In both of these machines, an I/O processor had 32 channels, and all of them could be attached to communications processors that could handle hundreds of ports. These machines were designed to be a public utility; then the microcomputer came out and someone added the Internet.

    The CPUs on this baby had several separate operations units: the binary unit for integer and floating point binary and logic instructions; the decimal unit for 64-digit operations, both fixed and floating point; the vector unit for vector operations, including matrix inversion; and the operations unit that ran a nine-stage pipeline and controlled the other units. Branching, conditional or not, consisted of switching to the correct internal cache in the pipeline. It took a few nanoseconds.

