Get ready to rumble! [Thierry] built exactly the same Hello-World-style LED blinker on both an ATtiny and a PIC (now technically stablemates in the same company!) to see how the experience compares.
It's not just an LED blinker, though. He added light sensing so that it only runs at night, using the Forrest Mims trick of reverse-biasing the LED and waiting for its internal capacitance to discharge. The real point, though, is to give the chip something to do besides sleeping.
Although [Thierry] is habitually an AVR user, the PIC came out ahead here because it draws less power while idling: awake and doing a bit of computation. That's mainly because the PIC has an on-board low-power oscillator that lets it run at 32 kHz, and because the chip's consumption is generally low. In the end, the AVR's power draw came in about 10% above the PIC's.
If you're fluent in one of the two chips but not the other, his two versions of the same code are an excellent way to get familiar with the one you don't know. We really like his light-sensing feature, which takes full advantage of sleep mode on both chips while the LED discharges. Honestly, at this level the two codebases are more alike than different.
(Oh, and did you notice that [Thierry] uses a paper clip as a coin-cell holder? Now that's a hack!)
Surprisingly, we've so far managed to dodge the stray bullets from the occasional crossfire between PIC and AVR partisans. We've run this matchup before, and although it was similarly close, the PIC won that round too. Will Microchip's purchase of Atmel calm the flames? Let's find out in the comments. We've got the popcorn ready!
Waiting for the MSP430 to join this battle!
And why not throw one of the (supposedly) super-power-saving ARM chips in against the 8-bitters while we're at it? (Probably something M0+-based.)
Technically speaking, the 16-bit MSP430 would also beat the 8-bitters.
Yes
+1: low power consumption is one of the MSP430's best features. I can't wait to see how it stacks up against the AVR and PIC. Or rather, if thrift is the name of the game, how they'd compete with it...
Compare it with the sleep current of PIC12F629.
Check out the parameter D020 in the data sheet.
OTOH, you can develop for both the MSP and AVR with the commonly used GCC C compiler. It's FLOSS/Linux-friendly, quite good at code optimization, and compiles "ordinary" C code, so you can use modern C99 features when you're not in Arduino's C++ dialect. Needless to say, that's far nicer than the thoroughly clunky PIC devtools; unless you're shipping millions of units, squeezing the last bit out of the unit price is the only reason to bother with those. For DIY/hacking it's beside the point entirely: saving $0.10 at the cost of extra development time isn't worth it at all.
My lasting impression of the MSP430 is that you needed a proprietary gdbproxy binary for gdb to talk to the device over JTAG, and IIRC it only worked on Windows.
Things may have changed since, but I've been reluctant to go near the MSP430 ever since, especially as ARM chips seem able to do as much or more.
@RedHatter: IIRC there have recently been open-source debug tools with a GDB interface for the MSP. I haven't used the MSP430 in a while though, and TBH I'm not a big fan of hardware debugging anyway. These days hardware debugging on Linux has improved a lot as OpenOCD has gained traction (for me, OpenOCD's real value is still in things like JTAG recovery of SOHO routers and the like).
As for the MSP430, I never really saw the point. Sure, they're excellent on power consumption, but in the end what I care about is battery life, and that's dominated by radio RX/TX time, not the microcontroller's draw. So I have no problem with the STM32 (especially the STM32L series). Yes, a 32-bit part may have somewhat higher power requirements, but it's a proper 32-bit CPU core with lots of superb peripherals, the far more common ARM architecture, and much better tool support all around. A proprietary 16-bit architecture is a hard sell these days. The MSP isn't bad, but it's left serving a few rather unfashionable niches, and being 16-bit only means there's no elegant way to scale up. Not to mention that 16-bit sucks, etc.
"Arduino by ATTiny"
I am not surprised by the results.
Well... after studying the code, it looks like the Arduino is only acting as the programmer.
There is nothing wrong with Arduino.
I currently have an "Arduino"-powered ATtiny5 on my desk (yes, the 6-pin Tiny5, not a Tinyx5), with a whopping 32 bytes of RAM and 512 bytes of flash... and there's enough room to do some useful things with it. Frankly, I used my own fork of ATTinyCore, and did some optimization work on it in the process.
Just because something is "Arduino" doesn't automatically make it bloated or starved.
Seconded. I use Arduino as an IDE for many things. I used an ATtiny84 to drive a two-line LCD as an "LCD backpack".
I did it mostly because I was too lazy to rework the LiquidCrystal library to run standalone, but still.
Hmm... so this pits an old "regular" device against a new "ultra-low-power-optimized" one.
Anyway, I'd be curious how low the Tiny can go if you add a prescaler and turn the clock down a bit. The maximum is 256? That would be 500 Hz operation.
Not necessarily better.
At a lower clock frequency, the CPU stays awake longer to finish the same work than it would at a higher one.
That means the unproductive leakage current also flows for longer.
It's generally best to run the core fast, but for a shorter time than the alternative.
Yes, I get that, which is why it's curious the author of the test didn't try it, at least for a 1:1 comparison in some respect.
I've used paper clips as cheap coin-cell holders in many small projects and hacks, and they work very well.
(I hope this is the correct way to post pictures here...)
Very good, I never thought you could solder paper clips.
Me neither, until I tried it. Turns out it's fine. It works.
Some of them, anyway. Some paper clips are enamel-coated, so you obviously want to avoid those.
And staples. And common galvanized hardware, such as nuts. Most things are solderable, especially with the help of some flux.
The PIC is better than the AVR in some ways, except that there was no easy way to program it in a high-level language without shelling out real money. Had Microchip been as hobbyist-friendly (read: free) as the Arduino ecosystem made the AVR, there'd be no argument at all. By the time Microchip wised up and bought Atmel, ARM had come along and performed so well that it makes 8-bit Microchip parts hardly worth your time anyway.
But your time is exactly the investment... and that's what makes 8-bit micros worthwhile in many places: they're easier to work with and learn, so the job takes less time. Of course there are many factors in play... and the newer devices muddy the water: ultra-simple ARMs with barely more peripherals than an 8-bit micro, 8-bit micros with ultra-complex peripherals, and so on...
You can program the PIC in C with the open-source SDCC. The uIP TCP stack has even been ported to SDCC on the PIC.
Like I said, for some simple things... but it's not in the same universe as Arduino's simplicity.
What?
You can. Unfortunately, the PIC target isn't that polished. For example, local static variables aren't initialized properly and you get no warning or anything about it, so you have to use globals instead (that one really bit me). The code generation isn't great either.
SDCC is great, and as far as I know it's the only free (as in freedom) C compiler for the PIC. I just hope it keeps going.
Shelling out cash? The PIC toolchain is free. True, certain optimizations are disabled, but you can program in C for free.
You mean, if you don't pay, the code is deliberately bloated.
Non-optimized means it translates C to asm at roughly a 1:1 ratio. Behind the scenes, a compiler can pull many tricks to make your program run faster or take less space. If you don't pay, those tricks are disabled.
>Shelling out cash? The PIC toolchain is free.
But IIRC it only runs on Windows, which is awkward for developers and the like; Linux suits developers and power users better. Linux also has big advantages if you want some "automation hub" or "IoT gateway": it's easy to implement on small Linux boards like the Pi and other ARM SBCs, so high-level PC-grade networking can meet the low-level microcontrollers and the two worlds merge. That's how the IoT thing gets going.
Not to mention that learning Microchip's "free" tools means 100% vendor lock-in: the knowledge barely transfers anywhere else, because other architectures are completely different and Microchip's toolchain can't really target anything else. So people end up stuck on one arch and have to spend a lot of energy to use anything else, learning a brand-new toolchain and a completely different architecture. Cortex-M parts, by contrast, generally use the CMSIS peripheral abstraction layer, so when it comes to rewriting code, it can be ported to other MCU vendors with moderate effort.
Pah. I used to use the 16F-series PICs. The moment I needed more RAM, I discovered that to escape the bank-switching headaches I'd have to add an external RAM chip. For the price of the PIC plus RAM, I could buy an AVR with it all on board.
I don't think 8-bit MCUs will disappear anytime soon. They're still the best sellers.
On one hand there are legacy products to support; on the other, when designing for mass production, the cheapest 32-bit ARM MCUs now compete with the cheapest 8-bit parts. An engineer will aim to meet the requirement at the lowest cost: if an 8-bit part comes in cheaper than a 32-bit ARM M0, he'll choose the 8-bit part.
I sort of agree with you. But looking at how they've started making very simple ARMs (think STM32F031), they may well draw people in and get them comfortable with the environment.
You haven't seen the IDE Microchip provides for the PIC lines. There are free C compilers for the 8-bit, 16-bit and 32-bit Microchip PIC lines in that IDE. It even includes a complete C environment, unlike the Arduino IDE, so you can do things like "#include <stdio.h>" and then use printf and friends. And that was added to the Microchip IDE before they bought Atmel, by the way.
The last time I used an 8-bit MCU in a professional setting was more than 10 years ago (Cypress EZ-USB stuff).
They've gone the way of the dodo. Even where volumes are still large, they mostly come from legacy low-cost outsourced projects.
If you're a student or young engineer, don't waste time on 8/16-bit parts, or on flamewars like PIC vs AVR; that's a thing of the past.
I made a built-in PCB battery holder too, using rather more of the paper clip :-)
Top surface:
From the bottom:
Dubious from a mass-production standpoint, maybe, but I absolutely *love* a whole-board implementation like this. Looks great!
Love the other through-PCB components too!
Ah, so pretty. Well done!!
Awesome.
If you're laying out a PCB anyway, why not use a proper battery holder?
What material is the paper clip made of?
What material is the battery's surface made of?
What happens where the two meet, with current flowing through the junction, in a continuously high-humidity environment?
Paper clips also don't have the spring this job calls for. Put the device in an environmental chamber and cycle it.
In other words, your husband is very smart!
Paper clip: steel; battery: (stainless) steel.
But either way: keeping high (condensing) humidity around a coin cell is a bad idea. The gap between the case (+) and the (-) pole is very small, which leads to very fast creep corrosion. The current isn't the problem; the permanent 3 V potential difference is.
Low spring force could also become a problem.
As the "bottom" photo shows, a misaligned cell would short the positive and negative terminals against the edge of the paper clip. If the coin cell were flipped over, that wouldn't be an issue.
Ridiculously simplistic. I like PICs because they have a more complete set of oddball integrated peripherals, but that's hard to measure in a single contest (though I suppose you did exercise it with the 32 kHz oscillator).
I never use more than two or three of them in any one project, but a different set each time, so on average the PIC wins. Plus, AVR users are bad engineers; see Arduino. Haha! Sad.
Really?
I mean, he did switch to PIC recently, but... yeah, anyone who claims only bad engineers use the AVR needs to have a chat with Professor Land.
Real engineers don't use compilers unless it's 32-bit.
Don't lump everyone together. My point is that even on the same uC you should distinguish Arduino users from AVR users. Plenty of people do excellent work on the AVR without any of the Arduino nonsense.
When someone designs real products on the ATtiny platform (in plain AVR C), I'll humbly concede the point.
The attempt to derail this toward the PIC was too obvious, and it dropped the discussion straight into the AVR = Arduino flamewar. Far too obvious.
Now let's compare the PIC24 series against the AVR.
Is 128 kHz really the slowest an ATtiny can run? With the right fuse settings (clock from the watchdog oscillator, CKDIV8 fuse programmed), shouldn't it run at 16 kHz? Whether that actually reduces total energy consumption is another question.
I started with PICs back in the day, before they even had interrupts. As the line grew, they stayed very loyal to their customers, making it easy to upgrade chips and back-port existing code. Almost nobody had a free compiler then, and Atmel was "who's that?". It was the humble standalone PIC, an 8051, or something more complicated and expensive. As the PICs grew, the really fancy peripherals became the secret weapon of many embedded designs I did for telcos and others. You could even use a digital I/O as an analog gate (shorting or not shorting a signal to ground, with extremely low ground-bounce noise), or use the Schmitt-trigger inputs and a couple of passives (a cap and a FET) to fake an analog-to-digital converter on digital-only chips.
There was a wide range of compilers, from CCS ($35 at the time; not great then, works now, and at today's $300 quite good) to HiTech, which cost about $1,000 back then, could often beat an experienced ASM programmer, and later became Microchip's own $300 product. There was also one that looked really attractive (I even had to get my money back from DigiKey) but didn't even handle interrupts or save temporaries; it had just copied their application notes. In the end I either moved that code to CCS's newer, pricier version, which works well, or kludged it by hand (we did plenty of that, with very little taste, and it all worked).
But the advent of Arduino changed all that. I no longer design for 100,000-unit runs, and for one-offs the price of a cheap Chinese Arduino clone is a joke. The IDE is poor compared with the best of them, but it works, and with all the good code out there (CCS has some good libs too, just not as comprehensive) and the community... there's simply no contest.
For most of my career I tried to persuade the various big companies, Microchip, TI and the FPGA makers, that their toolsets should be treated as a cost center, a loss leader, with only limited success. I pointed out that as a designer I was sometimes unwilling to pay upwards of $10,000 for the "privilege" of designing their DSP or FPGA into a customer's product.
Some of them got it, some didn't; at least some development boards came out of it, though still priced as profit centers. These idiots don't realize that whatever a perpetually cash-strapped university lab can afford is exactly what gets designed into mainstream products once the kids graduate. Some of them failed for that shortsightedness.
Not that some of us didn't set an example (no, I don't do this anymore, but it was descended from the overpriced BASIC Stamps, and much better):
The part numbers should date it, more or less. I no longer maintain that particular site.
Look familiar, anyone?
I love projects like this. Built on a general-purpose breakout board, and with paper clips to boot. In short, it warms my heart.
The interesting question is whether you could push the power usage down further: pull-ups on unused pins, tweaked clock settings, and so on. Of course the LED drowns out the micro's consumption, but clearly that wasn't the focus of the project.
A comparison of the ATtiny10 against the PIC10F would also be interesting. Hopefully they can push the efficiency further and shoot for the moon, 1 uA Dave Jones (EEVblog) style.
The PIC-vs-AVR battle is so very 90s. Early 00s at best. How about STM32 vs Tiva C / MSP432 vs ATSAMD vs AVR32 vs PIC32?
I think it'd be more like STM32 vs LPC vs *insert other ARM brand here*.
Currently STM seems to be winning that battle thanks to the availability of easy-to-use open-source tools (AC6).
You can add NXP's Kinetis: since they acquired Freescale, Kinetis is very strong in the industrial market (though few hobbyists touch it).
I wish people appreciated what integrated peripherals can do.
There are too many high-level-language programmers on this site, and code is their hammer. Moar speed, moar memory, moar better.
I recently redesigned a customer's circuit from a four-channel PWM motor controller bit-banged in software on a 100 MIPS part down to a 4 MIPS 8-bit PIC using its integrated modules. I could even add a bunch of features the previous controller couldn't handle because it was too busy.
I did something similar last year. A previous engineer had tried to measure two fast pulses by simply polling the inputs with a very fast processor. I used the PIC's capture/compare module instead of code, which let me eliminate two ~$5 processors from the board. That alone cut the BOM cost by 50%, and because the result draws far less power, takes less space, needs fewer support components, etc.... the final BOM cost came down by 80%.
Seconded.
The newer analog peripherals in PICs are hard to beat. You can build a complete SMPS or LED driver out of them now.
No question, peripherals matter enormously. And agreed... the newer analog peripherals on the PICs are really quite surprising.
Now if only Microchip would release a CPU core worth a damn to go with those amazing super-duper peripherals... and provide a decent, unrestricted open-source compiler... one can dream.
Yes, the PIC peripherals are great, well thought out, with some amazing features. But I agree, the CPU core leaves a lot to be desired.
A previous poster said the compiler is "free", but only if you use the version that "deliberately" emits bloated garbage code. To anyone who reads the output ASM it's so obviously deliberate that it makes your blood boil. IMO it's just a (moldy) carrot to lure users into paying.
Maybe now that Microchip owns both, they'll push the PIC peripherals onto the Atmel CPU cores? The Atmel cores are much better than the PIC cores, even if not as good as some others. At least it would be a step in the right direction.
Would avr-gcc support a chip with an AVR core and PIC peripherals? That must be feasible now, right?
I largely agree. The PIC peripherals are very complete and hard to beat. Even though the core is dated, I find myself drawn back to PICs again and again.
That said, I've been following the new ATtiny817 series. The peripherals aren't quite PIC16-grade (most notably it lacks peripheral pin select, which is unfortunate), but they're still impressive. Maybe it heralds nicer things to come?
Lol, software-controlled LED/buck/boost/SEPIC converters are nothing new.
Say, look at these crazy Russians. What do you find on the front page? A fully software-controlled buck circuit using nothing but an ATtiny. The thread is 2.5 years old, and the circuit wasn't exactly new even in 2014. Even before that there were SEPIC and buck drivers. Some smartass had the idea of using a TS5A3159 (a 1-ohm analog switch) as a low-voltage "FET driver" together with an IRLHM630 (40 A in a compact PowerQFN) FET to get more efficient power conversion in the "ultimate" custom torch. The circuit in question is a "simplified" version using the IRLHS6342 FET: it can "only" deliver 8/19 A despite the high drive frequency, but its gate charge is much lower, so the microcontroller port can drive it directly. And that's how you get a multi-mode, efficient, fully software-controlled SEPIC/buck/boost. Note that they put the buck coil into the so-called "ground" rail so they could use an N-FET with "direct drive" instead of an (inherently worse) P-FET :)
For completeness, for those weird ppl who want to play with fire and take on the world with a software-defined approach:
here's how to build a super-efficient, ultra-compact, multi-mode BOOST converter (for state-of-the-art LED flashlights) using only a Tiny85:
(Unfortunately, one of only a few places where 8-bit is still understood.)
The scheme itself is very simple: look at Wikipedia and replace the abstract switch with an N-FET. It's the software, the excellent component selection, and the clever hack of pressing the TS5A into service as a "low-voltage FET driver" that make the thing rock. That entertaining thread also shows both a hardcore DIY build on a very advanced PCB and a "proper" factory-made version of the same circuit. Yes, the best DIYers don't flinch at QFN and can ultimately send their boards to a fab for a production run. That's how a DIYer becomes an engineer.
Yes, it can be done, but that doesn't mean it's optimal. Microcontroller-based switchers aren't particularly high-performance, because the ADC, processing speed and PWM bottleneck the overall bandwidth and switching frequency. Jitter in the control loop can hurt regulation.
Today you can get regulators with on-chip switches running PWM at 1.5 MHz. At those frequencies you can use very small inductors and ceramic-only capacitors, and get very fast transient response from the SMPS.
>Yes, it can be done, but that does not mean it is optimal.
Those designs use 250 kHz, 2.2 uH coils, epic PMEG diodes, and 100 uF ceramic caps. The buck exceeds 95% efficiency, the boost exceeds 90%. And there's a task-specific "cheat" in the buck: once the battery is depleted to the point where D = 100%, switching stops and the FET is left fully on, for 99% efficiency. The LED still never exceeds its rated current, because the trick only kicks in on a depleted battery that can't push past the limit anyway.
These guys are obsessed with efficiency, and a flashlight also implies cramped space. They build the circuits for themselves, trying to wring the most out of them (fun), so they can afford top-shelf components and designs. Some designs parallel several Schottkys to push efficiency and/or current handling further. Not to mention they spend a lot of time making it all as small as possible; some have managed to fit it into 15 mm flashlights.
By the way, did you notice those crazy guys using the IRLHM620? Feel free to Google the PDF; the thing is epic, a 40 A FET in a QFN. Yes, they solder QFNs, using plain toner transfer for prototype runs (and sometimes rolling out real production runs at a nearby fab). Hello, breadboard lovers :-).
> Very small inductors can be obtained using only ceramic caps
Since a flashlight is small by definition, they're forced to do exactly that, so they put a lot of effort into optimizing part selection. They have well-formed opinions on switching losses vs
> And it has very fast transient response when running the SMPS at those frequencies.
OTOH, a 1.5 MHz high-current circuit puts very high demands on PCB layout. Even 250 kHz isn't forgiving: put the MCU too close to the coil and it hangs immediately. By the way, you do realize they offer 4-5 different output currents at the user's choice, right? Care to show 5 different modes on a 1.5 MHz SMPS? This particular family of circuits doesn't have to handle fast transients: battery discharge is a slow process. Anyone who does care about fast transients should double-check that an MCU can meet the requirements. But anyone who signs up for SMPS work has to learn this stuff, or they're in for a world of trouble. Sorry to break it to the breadboard lovers, but you can't do a real SMPS on a breadboard; even 100 kHz would be a disaster. And these people do care about Rds_on and compute conduction and switching losses. There's a suggested low-power version of the circuit using the IRLHS6242 FET: the 620's Rds_on is the ultimate, but its gate charge is high, while the IRLHS6242's gate charge is so low and its switching curve so steep that the uC can drive it straight from a GPIO, without the TS5A "gate driver". It only costs a few percent of efficiency; the 6242 handles less current and its Rds_on is higher.
If you need fast data transfer without keeping the CPU core busy, peripherals with DMA support are simply unbeatable. The AVR's only advantage over the STM32 is more predictable timing with less jitter, which is what lets people implement USB low speed (1.5 Mbps) in pure software. An STM32 won't do that, but it can do far more practical things instead.
Not to mention that even $1 STM32 ICs have powerful timers, at least one of which drives true three-phase PWM with automatic dead-time insertion and so on. Guess what that means for motor control. STM32 timers are cool across the board, and even a $1 IC has plenty of fast, advanced peripherals that beat the PIC and AVR. For DIY, legacy stuff like the PIC really isn't the best choice, unless you genuinely plan to ship 100 million devices so that the horrors of PIC development pay for themselves.
At current prices, with excellent CPU cores and top-notch peripherals, 32-bit MCUs are showing the 8-bit parts who's king; the 8-bit market is already being squeezed by the 32-bit M0/M3 ;)
I managed to eliminate interrupt jitter on the STM32F030 in my VGA terminal project with nothing but C code, running from flash, without even compiler intrinsics. The IRQ triggers the DMA video output, and jitter of even a single clock cycle would scramble the display. More advanced DMA (such as NXP/Freescale Kinetis) doesn't even need an IRQ to trigger the DMA.
With modern peripherals you hardly have to worry about jitter. Why bit-bang when the hardware does the work for you?
Well, the last time I looked at NXP uCs, their flash controllers sulked and demanded you call their clunky API code to write flash, and the API had a bunch of strange requirements. I don't know whether they've dropped that awful nonsense in newer uCs, but I'm already inclined not to touch NXP with a 10-foot pole for the next 10 years. STM handles that part much more to my taste.
As for bit-banging, it can be interesting for learning a protocol and how things work, if that's part of the plan. Otherwise it just delays development for no reason. In any case, for DIY there's no need to save a few cents on parts, and even in real mass production, cost optimization only makes sense at real volume, where the unit savings cover the extra work. Spending 10 extra hours to save $5 isn't epic; it's a project-management failure.
Like in some YouTube video blog (I'm not sure whether it was EEVblog or something similar): the guy hacked a digital scale with an SPI interface between the ADC and the CPU. He went on for quite a while about how fast it was (a whole 2 or 3 kHz) and how EN and CLK couldn't be captured by polling :-). His solution was an 8-core Parallax Propeller board. Instead, he could have used an SPI port in slave mode, which has existed for decades, even on the old 8051.
When the only tool you know is a hammer, every problem looks like a nail. And if it doesn't, you need a bigger hammer. :-)
All these comments make it clear: code is their big hammer.
Since they're actually the same company now, it would be great to see the ATmega328 line gain features like a 12-bit ADC, dual UARTs and better peripheral pin select: advantages the PIC has always had.
Take a look at the XMega series.
XMEGA32E5. Lovely!
Though for the same price you can get a SAMD21E with 128K of code space. What it's missing is the dual 12-bit DAC.
Sorry to break it to Atmel, but when it comes to 32-bit, feature-rich uCs, they suck. ST put a good 12-bit DAC and ADC into the fairly cheap STM32F100. XMEGA is pointless, Atmel's own proprietary 32-bit uCs flopped, and they got into Cortex-M really late, so their peripherals still can't match the competition's. Atmel's A-series Cortex parts and M-series MCUs simply aren't competitive: Chinese SoCs beat Atmel in the A-series space, while STM, NXP and plenty of lesser-known vendors (EnergyMicro and the like) take the M series far more seriously.
Depends what you're looking for. Comparing the 128K STM32F100 against the SAMD21, they're roughly the same price* (ST's is lower): ST gives you more stuff and more speed, but worse energy efficiency. And I can't say I'm won over by ST's L series, which is 50% pricier than the SAM for the same performance, even if it's loaded with extras.
For me as a hobbyist, the xmega and SAM lose out on peak performance, energy efficiency, and familiarity with a good IDE; for my needs I can just grab a Nucleo and some mbed.
*I'm checking prices at a large local distributor; the balance may shift elsewhere on Earth. Fake Chinese junk not counted.
This "energy saving" marketing BS is annoying. If my uC sleeps most of the time in most applications, and something like the RF RX/TX draws most of the power anyway, why should I care? All those tales of super power efficiency are cute, but I have no major problems with the STM32, and by the way there's the STM32L for the more demanding low-power jobs. The great thing about STM32 is that it scales: the range goes from the cheapest parts, which compete with 8-bitters on price, up to high-end parts that can even boot Linux (someone has promised mainline kernel support for some STM32s). Atmel? Fine, cool, good luck migrating from a mega to a SAMD. Do they have the same familiar peripherals? A comparable portfolio of comparable breadth? Or do you get stuck? What's more, Atmel ignored Cortex-M for far too long, and I can't wait out every laggard on this planet. And if Atmel wants to preach power efficiency, they should really give their A-series parts some love instead of doing clumsy marketing, because those could use it in the performance-per-watt department. Speaking of Cortex-A, isn't it embarrassing that $5 Chinese SoCs like Allwinner and Rockchip beat Atmel on price, performance, peripherals and performance per watt? And no, they're not "counterfeits"; these days Atmel would be more likely to end up cloning Chinese ICs. Sorry to break it to Atmel, but they have quite a few competitors now and aren't competing hard: megas/xmegas are expensive yet thoroughly dated, and Atmel's Cortex-M parts are nothing epic.
@LinuxDude You are wrong about power saving being BS. Check the standby and run currents in the datasheets yourself. The differences are several-fold (sometimes many times), and that cannot be ignored in battery-powered systems.
However, you are right about most of the rest. Yes, Atmel's product portfolio cannot match ST's, but that is not necessarily a bad thing. For many applications I don't need a $1 micro to have the same ADC as a powerful 32-bit Cortex.
I can't really comment on Atmel, because I have basically never used them.
I just want to inform you that EnergyMicro is no more; it was acquired by Silicon Labs, which kept the EFM32 series and the Gecko names.
We usually do a quick cross-comparison of the larger Cortex-M manufacturers. But so far we have always found a device suited to the task in the ST catalog, so we stick with it, because it is easier for firmware development to stay with the same supplier, its libraries and its toolchain. Our products now range from the L1 series through the F1, F2, F4 and even F7.
One thing can be said with certainty: when it comes to microcontrollers, we now have tons of options. That is generally good, but it means people have to do more research to find the parts that best suit their needs. Personally, I like the STM32F072/F042, STM32F103 and STM32F411 parts. I can use the STM32Cube HAL library with all four parts and port code between them; it is amazing!
I must also say that the peripherals on the STM32 are anything but useless; in many cases they are arguably better than anything on a PIC. LinuxDude only mentioned DMA in his earlier comment, which doesn't do them justice. Besides ADCs that reach 2.4 MSPS on the STM32F4 and 5 MSPS on the F3, they have powerful built-in flexibility such as injected channels, analog watchdogs and more. The timers are also great, as are the UARTs with their fractional baud-rate generator (F4). Part of what makes the STM32 a bit daunting is the complexity of the peripherals, but complexity is the price of having peripherals this powerful.
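As a concrete illustration of that fractional baud-rate generator, here is a quick sketch of the F1/F4-style calculation (16x oversampling, 4-bit fraction, per the reference manuals). The 84 MHz peripheral clock and 115200 baud target are assumed values for illustration:

```python
# Hedged sketch: STM32-style fractional baud-rate generator.
# f_ck = 84 MHz is an assumed APB clock; the mantissa/4-bit-fraction
# register layout follows the F1/F4 BRR convention.
def brr_for(f_ck, baud, oversample=16):
    usartdiv = f_ck / (oversample * baud)
    mantissa = int(usartdiv)
    fraction = round((usartdiv - mantissa) * oversample)
    if fraction == oversample:          # carry if the fraction rounds up
        mantissa, fraction = mantissa + 1, 0
    actual = f_ck / (oversample * (mantissa + fraction / oversample))
    return mantissa, fraction, actual

m, f, actual = brr_for(84_000_000, 115200)
error_pct = 100 * (actual - 115200) / 115200
```

With these numbers the generator lands within about 0.02% of the target rate, which is why awkward clock/baud combinations still work.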
You can also get an STM32F1 board in a tiny form factor (the "Blue Pill") from AliExpress for $2, and an ST-LINK from the same site for another $2. Things really don't get much cheaper than that.
I never really used EnergyMicro, just evaluated some of their parts for a task, so I hadn't followed their fate. That does explain why I've been seeing SiLabs ARM parts in some devices lately.
I also like the STM32 product portfolio. Reasonable and complete, with no major defects or gaps. If you stick with STM32 you can do most of what a microcontroller should do. Maybe they are not the best in every respect, but they brought Cortex-M to market quickly and in a sensible way, and they are certainly "good enough" for most tasks. Unlike Atmel, which fell behind trying to push its own proprietary 32-bit parts while selling expensive but dated ATmegas. As for Atmel, I have had enough; as far as I can tell, they never recovered from those strange decisions.
Like many people, I started with the classic microcontroller, the PIC16F84. I built an RS-232 programmer I found in Elektor magazine. It was indeed primitive, slow and error-prone, especially under Windows 98, and it could only program a few devices. MPLAB X with XC has come a long way, and a PICkit 3 programmer can program every available PIC as well as future devices. Hit F6 to compile and program with one click. A separate hex programmer is no longer needed, and you don't have to reload the hex file after every change and recompile.
Availability has also improved greatly. At the beginning of this century only the most popular PICs were obtainable, and no Atmels. Now I can pick any PIC from the selection guide and order it from my local supplier.
Over the years I have used many of the peripherals in these PICs and been impressed by how much work they offload from the CPU; they can almost be regarded as coprocessors, especially on the PIC24 and PIC32, with features like automatic ADC capture and conversion. Certain things cannot be achieved by bit-banging, such as measuring in the sub-nanosecond range with the CTMU module or capturing fast logic timing on a floppy disk drive. The comparator can trigger the ADC to start sampling automatically, effectively making it a digital oscilloscope; triggering manually makes the scope trace lose sync and jump around.
I only use Atmel chips in Arduinos, programmed from the Arduino IDE. It is impressive how quickly you can get something up and running, but in a way I still prefer PICs for my own projects. I think I am simply more familiar with the PIC toolchain, and reaching the same level elsewhere would take a lot of time. True, flash on the 8-bit PICs is quite small, but the PIC24 and PIC32 are good choices. Generally, when memory runs short you must enable compiler optimization, hand-optimize the code, or avoid memory-hungry libraries. printf and float usage in particular can fill a 4K device immediately; they are usually unnecessary, and a bare-bones replacement function usually does the job fine.
Microchip seems to have recently released a new PIC32 series, the PIC32MM, which competes directly with the ARM Cortex-M0: the PIC32MM scores 3.17 CoreMark/MHz versus 2.49 for the Cortex-M0.
Another series, the PIC32MK, is also being released, but there is very little information about it so far.
The next free tool chain.
In any case, if you are not new to microcontrollers, STM32 is the winner!!!
1) Low-end ICs can cost $1 or less. I didn't even get a volume discount from the supplier, just bought 30 pcs at a time. Yes, that is for a Cortex-M3 (I got really lucky with that deal).
2) A variety of neat packages: TSSOP-20 for low-pin-count parts, TQFP when more pins are needed, QFN for small but capable things, and even bleeding-edge CSP/BGA for those who put it to use. An advanced hobbyist who knows how to do "direct toner transfer" properly can handle TQFP or even QFN :P. That way people can etch very interesting devices in their kitchen.
3) Sophisticated peripherals you can set up to your needs. Most can do DMA. Imagine the CPU and peripherals running in parallel, with most of the data-movement drudgery offloaded from the CPU, so it can get more done.
4) Reasonable devtools, including open source, so applications can be developed the usual way, e.g. on a machine running Linux. The usual GCC works, and OpenOCD provides support too. If you are a fan of hardware debuggers, cheap FTDI 2232/4232 converters can be used for hardware debugging; dozens of them integrate easily with OpenOCD. You can now do serious hardware debugging for less than $20 instead of paying $100+ for expensive proprietary tools.
5) Loads of libraries, off-the-shelf firmware, examples... Cortex-M is really popular these days, and for good reason!
6) A built-in bootloader on a plain UART, so you can flash it with any 3.3V UART/TTL cable. Atmel, say, lacks this: flashing a blank Atmel requires a "hardware programmer" circuit. If, for example, someone with an Arduino manages to corrupt the bootloader, it cannot be repaired over the serial line alone; a "hardware programming interface" is required. That is not just a UART line, so an ordinary UART cable won't do, and the required circuit is closer to a "hardware debugger".
7) Real 32-bit parts. The M3 in particular really performs. Doing some RF communication with an NRF24 or the like? You don't want the neighbors fiddling with your radio-controlled things, right? Cortex-M is cool enough to run some real crypto. For example, stripping down the "tweetnacl" library for public-key encryption (!!!) gets the code down to 3 KiB with very little RAM use, leaving plenty of room for everything else.
8) Unlike PIC and AVR, Cortex-M is von Neumann from the developer's point of view and Harvard in hardware, both at once. No RETLW annoyance like on PIC, no silly separation of data and program memory like on AVR. You can run code from RAM if needed, which is convenient if, say, you want a custom self-updating bootloader. Or store static data in flash without special handling; the compiler only needs to know which regions are RO/RW.
The only good thing about Atmel is Arduino, which is easy for beginners. But it is a bit annoying: the limited 8-bit operations are really bad at math, and any result processing produces very slow code that is also large. As mentioned, the stripped tweetnacl is 3 KiB of code on ARM but 12 KiB on AVR. Wow.
The code is your hammer. Real peripherals will set you free.
Where is the hacker? Oh, there are clips. .: D: D
We are suckers for small projects. Put anything into a small enough package and you may get our attention. Make it small and useful, like this one, and we get really excited: it is a switch-mode power supply that takes up the same space as a traditional linear regulator.
True, the heavy lifting in [Kevin Hubbard]'s tiny buck converter is done by the PAM2305 DC-DC step-down converter chip, which needs only a few supporting components. But the engineering [Kevin] put into squeezing everything onto a PCB just 9 mm on a side is impressive. The largest passive on the board is the inductor, an 0805; everything else is 0603, so this is a good test piece if you want to try your SMD soldering skills. Check out the video after the break to see how fast the hand-soldering goes.
The total bill of materials runs only a buck or two, and the end result is a power supply with a solid 750 mA output that can handle a 1 A surge for 5 seconds. We do wonder whether a small heat-sink tab might not help there. With some black epoxy potting, it could at least pass for a real TO-220.
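For a sense of why a switcher in a TO-220 footprint is attractive at these currents, a back-of-the-envelope comparison against a linear regulator helps. The 3.3 V output and ~90% efficiency below are assumptions for illustration, not measured figures for this board:

```python
# Back-of-the-envelope: buck converter vs. linear regulator for the same job.
# 5 V in, 3.3 V out at 750 mA and 90% converter efficiency are all assumed
# round numbers, not measurements of the BML board.
V_IN, V_OUT, I_OUT, EFF = 5.0, 3.3, 0.75, 0.90

p_out = V_OUT * I_OUT                 # power delivered to the load
linear_loss = (V_IN - V_OUT) * I_OUT  # a linear reg burns all the headroom
buck_loss = p_out / EFF - p_out       # a buck loses only (1-eff)/eff of P_out
```

Under these assumptions the linear part dissipates well over a watt while the buck dissipates only a fraction of that, which is exactly the difference between needing a tab and not.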
[Kevin]’s Black Mesa Labs has a history of developing interesting projects,
to
. We look forward to whatever follows, assuming we can see it.
Do you need a small 3W DC/DC switcher to drop 5V to 3V in a 7805 TO-220 footprint? Or maybe you just want to learn to solder 0603 surface-mount components. Check out this $3 OSH project from Black Mesa Labs. $0.60 PCBs from
, BOM from $2
.
-Kevin Hubbard (@bml_khubbard)
Man, I still find it weird when people say 0603 parts are difficult to solder... Then I remember I used to hand-place even smaller components for a living! Great little project, love these things :D
Well, if you have decent hand-eye coordination, they are really not difficult. I only solder occasionally (despite being an EE), but when I do, 0603 is not that hard. 0402 is where it gets real.
Most of our recent boards are 0201.
Hopefully you don't have to solder those by hand, though. I think 0603 is quite easy and 0402 is doable, but best done under a good microscope; 0201 is almost invisible on the finished PCB. I can solder them under the microscope with good tools.
The real question is whether there is a solder mask. Of course your hands must be steady, your eyesight good, and the optical magnification appropriate.
Without a solder mask, 0603 pads approach the trace width of most homemade PCBs, the solder flows everywhere, and the joints turn ugly.
I don't like 0402, but mainly because many of them aren't even marked.
Sharp tweezers help a lot.
With this information, the IC could be swapped for one with different input and output limits. Looking at Digi-Key, the TSOT-23-5 packages that fit this board top out at 5.5V maximum input and 5V maximum output.
So no 9V-battery-to-5V-Arduino supply without changing the little board, but it is still very useful.
If there is enough interest, I will look into a wide-input-voltage variant. I am looking at this part. My requirements are no external diode and only a small 0805 chip inductor (2.2uH-4.7uH). This Rohm BD9C301FJ meets both requirements and has a wide input range of 5V to 18V and an output range of 1V to 12V. On the surface it looks good. The SOIC8 package costs $1.70.
+1
Almost finished. I want to add another resistor and possibly a ground ring on top. See pictures here:
Here is a link to a 3A version with wide input and output voltage ranges. It hasn't been built yet, but anyone interested can get a head start with it.
Why, when a CUI V7803W that takes up to 72V is so cheap? Or the even cheaper Murata OKI that withstands up to 35V? It is a good project to learn from, but for a real circuit there are already excellent DC-DC converters that drop into a TO-220-sized linear regulator's footprint...
Price and size are the two standout advantages. The CUI is a bulky thing that fits where a 780x does only if you are lucky (plus it is only rated for 500 mA and will run you $10 in quantity one). The Murata OKI-78SR-3.3/1.5-W36-C is more compact and, at $4.30, cheaper than the CUI, but both parts need significant input headroom: the CUI takes 9-72V in, the Murata 7-36V. The CUI part is great with a 9V battery as input (or it would be, until the battery runs down), but if you want to step a 5V rail down to 3V3, *neither applies* (e.g. because you already have a 5V rail but need a little 3V3 love).
If you are designing a product for production, the switch-mode circuit will certainly become part of the PCB design rather than a TO-220 replacement. But when prototyping, it is very convenient to experiment with the supply as a plug-in module, so if these are the parts you would integrate on the final PCB, why not use them for the prototype supply?
So, what is the white paper for during the soldering?
"It happened too fast, officer, I'm not sure what happened!"
The white paper is actually the tape that holds the SMD components.
Thank you!
To get more power you need a larger inductor; a larger heat sink won't help.
Correct. But note also that the inductor must have a very low DCR to get good efficiency. Otherwise, at this voltage range and considering the noise, there is no discernible advantage over a decent LDO linear regulator.
What is the point of this fun little project? If you need small, I would pick one of the complete modules that provide the same output power in a tiny package: for example the TPS82150 in a 3.0 x 2.8 x 1.5mm package, with a 3V to 17V input range and 1A continuous output current.
As for other DC-DC converters in the TO-220 package, a crowd of manufacturers has poured into the market, so there are plenty to choose from. Unless you have very special requirements, you can just test the ready-made products. (And if you only need one, or a few, remember to request samples ;)
I have been using these for ages: 7 to 36 volts in, 3.3 out.
This was a $0.50 test board to evaluate the switcher solution for my Spartan-7 FPGA design (
). The final design is a 4-layer $60 fab, so I wanted to make sure the switcher design was sound and not a risk to the FPGA design. Afterwards, I figured it would make a fun OSH gadget-kit project for people who want to solder their own $2 power supply (versus $5 for off-the-shelf parts).
Okay, that makes sense. But why would anyone pay $5 for an off-the-shelf part? Take the CUI VXO7803-500: $2.40 in single quantities from Mouser, and you only need some input and output caps on the target board (which should be there anyway), plus a few cents more.
The VXO7803-500 price is indeed great. For my FPGA design I step 5V down to 1V, so I cannot use that through-hole part. My TO-220 design was just a test board that became a fun DIY kit. The only advantage over off-the-shelf parts is that you can hard-select the output voltage; 3V is just a common example.
I use the TI TPS62160 switch-mode IC in my designs. It handles up to 17V in, can be programmed for a wide range of output voltages via feedback resistors, delivers 1A, switches at a high frequency, and runs down to Vin = Vout. It has extras such as EN and power-good, which are very useful in sequenced power configurations. By changing the value of one resistor in the BOM, I get a 3V3 or 5V switcher (or any of many other voltages I use less often).
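For the curious, the output-voltage programming works out like this. The sketch assumes the 0.8 V feedback reference given in the TPS62160 datasheet and a 100K bottom resistor; the E96 top-resistor values are illustrative picks, not the commenter's actual BOM:

```python
# Feedback divider for an adjustable buck like the TPS62160.
# V_FB = 0.8 V is the datasheet reference; R2 = 100k is an assumed
# bottom resistor, and the R1 values are nearby E96 picks.
V_FB = 0.8

def vout(r1, r2):
    # Vout = V_FB * (1 + R1/R2) for the standard divider topology
    return V_FB * (1 + r1 / r2)

v33 = vout(316_000, 100_000)  # ideal R1 = 312.5k -> 3.3 V nominal
v50 = vout(523_000, 100_000)  # ideal R1 = 525k   -> 5.0 V nominal
```

With standard E96 values the outputs land within about 1% of the 3.3 V and 5.0 V targets, which is why one swapped resistor is all it takes.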
Earlier this year I laid out and had made a batch of these on a TO-220-footprint PCB measuring 8.24 x 12.53mm. Each set of 3 PCBs cost only $0.75 (for the PCBs; the other components cost more). They are just the ticket for breadboarding, or for plugging into slots that call for a linreg in other designs.
TI parts are certainly not the cheapest option, but I have them and all the supporting components (including wirewound SMD inductors) on hand. You stick with what you know and trust.
I neglected to mention that TI offers fixed-output variants of this IC (3V3 and 5V), which slightly reduces the external part count. But given the cost of the IC, I prefer to adjust things with a pair of resistors (and I chose the values so that one of them is the bog-standard 100K for both the 3V3 and the 5V implementation).
TPS62150 and TPS62160 are very similar parts; that was not a typo.
I understand building something for fun or education, but versions of this have been available for many years at little cost.
Yes, Digi-Key's fixed 3.3V version is particularly good at only $4
. But I couldn't use a through-hole solution in the FPGA board design, so I did my own single-sided, surface-mount-only layout. The BML DC/DC "tool kit" is for the DIY crowd and as an education platform: after the fun of 0603 soldering, choose values for R1 and R2 and get any voltage between 1.0V and 5.0V.
The speed-up capacitor in the feedback network should bypass the top (high-side) resistor of the divider, not the bottom one. In the reference circuit, the capacitor is placed there to increase gain above a certain frequency; in the current configuration, the feedback network attenuates the high-frequency components instead.
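The rule of thumb here can be checked numerically: a capacitor across the top resistor adds a zero at f_z = 1/(2*pi*R_top*C), above which the divider stops attenuating the feedback signal. The 316K top resistor below is a hypothetical value; only the 100pF comes from the discussion:

```python
import math

# Zero frequency introduced by a feed-forward cap across the TOP divider
# resistor. R_TOP = 316k is hypothetical; C_FF = 100 pF is the capacitor
# mentioned in the comments.
R_TOP = 316e3
C_FF = 100e-12

f_zero = 1 / (2 * math.pi * R_TOP * C_FF)  # roughly 5 kHz with these values
```

Above that frequency the loop sees the full output ripple rather than the divided-down version, which is what speeds up transient response; bypassing the bottom resistor does the opposite.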
You are right. The 100pF capacitor is placed incorrectly in the PCB layout: it connects to GND when it should connect to Vout. The Gerbers will be updated within 24 hours and a new shared project uploaded to OSH Park. Thank you.
The 100pF capacitor is fixed. The new shared project is available here
I would be a little wary of EMI compliance with this kind of thing, to say nothing of off-the-shelf parts that do the same job for less money and are already EMI certified.
Then there is the human compulsion to keep reinventing axles and calling the wheels hubs. That may suit many people, but in this case the only "achievement" is squeezing the parts onto a smaller board.
To most of us this low-hanging fruit, possibly poisoned with EMI, looks unappetizing, but it looked juicy enough that [Dan] picked it and reflowed it. ...That's great!
I have been using the little Chinese buck boards for some time. They are adjustable via a small but easily tweaked trimpot. They have a very wide input and output range, efficiency in the 90% range at 5V to 3.3V, and a rated current of at least 1 amp. Oh, and they ship 10 of them for $10.
Before I got them I thought about replacing the potentiometers with fixed resistors, but after using them for a while, they are actually very good as-is. For a car project or something else subject to strong vibration I might consider swapping the pot, but I haven't felt the need.
As long as ASRock prices the card right, it could set a new standard for performance per dollar. The company clearly cut costs; now it needs to pass the savings on to customers. Performance meets our expectations, you get plenty of display outputs, the dual-slot form factor fits most cases, and the eight-pin power connector is no problem for most PSUs. Granted, ASRock could have used a beefier heat sink to take the pressure off the fans and keep them from spinning so fast. But as it stands, the Phantom Gaming X gives us no reason to recommend any of the more expensive Radeon RX 580s, and that is meant as a compliment.
AMD's Radeon RX 580 is old news at this point. But in a sense, that makes it an ideal calling card for ASRock's debut as a graphics-card company. The platform is stable. The competition is established. And the stakes are relatively low. If you are not familiar with the Phantom Gaming X Radeon RX580 8G OC, check out our
. Or let us go back in time
, when the Ellesmere GPU launched. Today's review draws on the experience of the past two years and applies it to a mainstream card whose design clearly prioritizes cost savings.
The Radeon RX 580's performance is well known, so this card's success hinges on whether its relatively small cooler, missing backplane and lack of flashy LEDs push the price down far enough to undercut other 580s. Since we got the Phantom Gaming X before it hit store shelves, we can only guess that ASRock wants to be on par with Sapphire's Nitro+, Gigabyte's Gaming 8G and PowerColor's Red Dragon, the cheapest Radeon RX 580s. Unfortunately, American readers may have to wait a while to find out: ASRock currently sells these cards only in South America and APEC countries.
A weight of only 598g tells us ASRock went with a conservative cooling solution for AMD's Ellesmere GPU. Still, at 26.7 cm from the slot bracket to the end of the fan shroud, this is a fairly long graphics card. A height of 10.5 cm and a width of 3.5 cm keep ASRock's Phantom Gaming X within true dual-slot dimensions.
Two 8.5 cm fans sit in 8.7 cm openings. Each fan's nine rotor blades are optimized to force air through the heat sink, so they generate more static pressure than fans designed for raw airflow.
ASRock saves some cost by omitting a backplane, which we think is a wise decision: a backplane would do little for cooling, and it isn't needed for rigidity because the thermal solution is so light.
Looking at the card from below, we see that ASRock orients the heat-sink fins horizontally. That is our preference, since it lets some hot air escape past the slot bracket. The alternative, vertical fins, pushes the hot air down toward the motherboard and against the side of the case.
The eight-pin auxiliary power connector, visible from the top, is rotated 180 degrees to make it easier to reach. On a higher-priced model you might expect ASRock's logo to be LED-backlit. Not so on the Phantom Gaming X, and we are fine with that.
The open back end lets warm air vent into your case. And because the PCB is shorter than the cooler, ASRock's heat sink extends slightly past the board.
The slot bracket carries five familiar outputs. Besides a single DVI connector, you get an HDMI 2.0 port (handy for VR HMDs) and three DisplayPort 1.4-ready interfaces. Vents in the bracket let some of the hot air from the horizontal heat sink exit the case.
The GPU-Z screenshot below shows the card's maximum clock rates. In practice, the Phantom Gaming X's power and temperature limits mean these frequencies are generally not sustainable.
We introduced our new test system and method
. If you want to know more about our general approach, check it out. Since then, we have upgraded the CPU and cooling to ensure nothing holds back a graphics card this fast.
The hardware used in our laboratory includes:
Tom's Hardware is part of an international media group and leading digital publisher Future US Inc.
AMD has been through some difficult times recently: management shakeups, layoffs, and a brain drain as many familiar faces flee to greener pastures. All of this has played out against a backdrop of mounting financial losses and thorny questions about the company's future and direction.
Much of the turmoil traces back to one major, decisive event: the difficult and disappointing birth of a new CPU microarchitecture called Bulldozer. As technical people, we may overestimate the role of technology in these problems. Still, Bulldozer was regarded by many as AMD's next great hope, its first all-new x86 CPU architecture in a decade. When the FX processors failed not only to catch Intel's competition but even to beat AMD's own prior-generation chips in performance and energy efficiency, some nasty consequences were inevitable.
Once the first chips were out, AMD's engineering mandate became clear: improve the Bulldozer microarchitecture as much as possible. Alongside the FX processors, the company announced a roadmap of yearly updates to its CPU cores, each promising better performance and efficiency. The first of these incremental updates, called "Piledriver," is a modest refresh that first reached the market last spring. Now, about a year after the first FX chips arrived, improved FX processors based on Piledriver make their debut more or less on schedule. All things considered, that is an encouraging sign.
The question now is whether it is enough. Can these CPUs carve out room in a fiercely competitive market? The answer may surprise you.
The chip before us today is code-named Vishera, the direct successor to the silicon behind the previous generation of FX processors (known as Orochi). Vishera and Orochi share almost everything: both are manufactured on GlobalFoundries' 32nm SOI process, both have 8MB of L3 cache, and both are nominally eight-core CPUs. The biggest difference is the move from Bulldozer cores to Piledriver cores, or more precisely, from Bulldozer modules to Piledriver modules. These "modules" are the basic building block of AMD's latest architectures; each contains two "tightly coupled" integer cores that share certain resources, including the front end, the L2 cache and the floating-point unit. AMD therefore counts a four-module FX processor as an eight-core CPU, and we can't entirely object to the label.
(Table: product lineup by modules, cache size, process node in nm, transistor count in millions, and die area in square millimeters.)
We covered the Piledriver module's enhancements in more detail
, but the gist is simple. Piledriver consists of fine-grained tweaks to every part of the module aimed at improving instruction throughput. The changes span the CPU's front end, the cores and the cache subsystem, and no single one of them raises throughput by more than about 1%. The total gain may be only 6% or so, hardly a huge leap. Piledriver brings other modifications, though. The FPU supports the three-operand version of the fused multiply-add instruction, a key part of the AVX spec that Intel's upcoming Haswell chips will also support, putting AMD and Intel on the same page. (At least for now, support for Bulldozer's FMA4 instructions is retained.) More importantly, Piledriver has been tuned to reach higher clock speeds at lower voltages. That tuning paid off nicely in the mobile Trinity chips and, as we will see, it benefits the desktop FX processors as well.
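The arithmetic behind "many sub-1% tweaks totaling about 6%" is simple compounding, since independent speedups multiply. The eight 0.75% gains below are hypothetical numbers chosen to match the article's rough totals:

```python
import math

# Independent throughput improvements compound multiplicatively.
# Eight hypothetical tweaks of 0.75% each illustrate how sub-1% changes
# can add up to the ~6% overall figure cited for Piledriver.
gains = [0.0075] * 8
overall = math.prod(1 + g for g in gains) - 1  # just over 6%
```

The product slightly exceeds the plain sum of the gains, but for changes this small the two are nearly identical, which is why "roughly 6%" is a fair summary either way.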
(Table: FX lineup with base and Turbo clock speeds and north bridge speeds.)
The new FX chip lineup is detailed above. Today's headliner is the FX-8350, one of four new Vishera-based parts AMD has provided us. The FX-8350 has the same power envelope (125W) and Turbo peak (4.2GHz) as the FX-8150 it replaces. The most significant difference is the base clock: the FX-8350 runs at a nosebleed-inducing 4GHz, up from its predecessor's 3.6GHz.
The FX-8350's higher base frequency should improve performance, especially in multithreaded workloads. But if you are like me, you look at the mere 200MHz gap between its base and Turbo peak clocks and wonder why it isn't bigger. After all, the whole idea of these dynamic clocking schemes is to exploit the extra thermal headroom available when not all cores are busy. Vishera can cut power to inactive modules, leaving more headroom for the ones still active, which should generally allow higher voltages and frequencies within the same thermal envelope. The FX-8320 has a 500MHz gap between its base and peak clocks; why doesn't the FX-8350 offer a similar boost to its peak frequency?
Our best guess is that few of these chips can sustain frequencies above 4.2GHz reliably at voltages low enough for AMD to mass-produce parts with higher Turbo peaks. If so, that is a shame, because weak performance in lightly threaded workloads is arguably this CPU architecture's biggest weakness, and higher Turbo frequencies would go a long way toward addressing it.
That said, the FX-8350 is priced rather well. At $195 it slots in among several of Intel's Ivy Bridge-based parts: the Core i5-3470 at $185 and the Core i5-3570K at $225, both true quad-core, four-thread processors. Of those two, only the i5-3570K has an unlocked multiplier for easy overclocking, while all FX parts are unlocked. On the other hand, the Intel processors' rated peak power is 77W, far below the FX-8350's 125W TDP.
Speaking of smaller power envelopes, the two lower-end FX models take advantage of Piledriver's power improvements by dropping to a more moderate 95W. The chips they replace, the FX-6200 and FX-4170, were both 125W parts. The new models even sacrifice some clock speed to get there: the FX-6300 runs at 3.5/4.1GHz, versus 3.8/4.1GHz for the older FX-6200. AMD tells us it expects the two parts to perform similarly, since Piledriver's per-clock gains should make up some of the difference.
The lowest-end FX processor, the FX-4300, overlaps almost completely with the A10-5800K desktop Trinity we reviewed earlier this month. Both are priced at $122. The 5800K has a 200MHz higher Turbo peak, a 5W higher maximum power draw, and integrated graphics; the FX-4300 counters with the 4MB of L3 cache Trinity lacks. Then again, the A-series APUs integrate PCIe connectivity and plug into their own brand-new socket, while the new FX series uses the same Socket AM3+ infrastructure as the prior models, so they really serve different platforms.
We ran each test at least three times and reported the median score produced.
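The median-of-repeated-runs policy can be sketched as a tiny helper. The run scores below are hypothetical, not measured results:

```python
import statistics

def median_score(run_scores):
    """Return the median of repeated benchmark runs.

    The median damps the effect of a single outlier run (say, one
    disturbed by a background task) better than the mean does.
    """
    if len(run_scores) < 3:
        raise ValueError("need at least three runs")
    return statistics.median(run_scores)

# Hypothetical scores from three runs of the same test:
print(median_score([41.8, 42.1, 39.5]))  # -> 41.8
```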
The configuration of the test system is as follows:
Phenom II X4 980
Phenom II X6 1100T
AMD FX-4170
AMD FX-6200
AMD FX-8150
FX-8350
Core i3-3225
Core i5-2400
Core i5-2500K
Core i7-2600K
Core i5-3470
Core i5-3570K
Core i7-3770K
Core i7-3820
Version
DDR3 SDRAM
Corsair Vengeance
driver
iRST 11.1.0.1006
RSTe 3.0.0.3020
SB950 / ALC889 with Realtek 6.0.1.6602 drivers
Z77/ALC898 and
X79 / ALC892 and
AMD A10-5800K
Core i5-760
Core i7-875K
P7P55D-E Pro
A75 / ALC889 and
P55 / VIA VT1828S with Microsoft drivers
They all share the following common elements:
(Only AMD system: KB2646060, KB2645594 patch)
Thanks to Corsair, XFX, Kingston, MSI, Asus, Gigabyte, Intel and AMD for helping us equip our test bench with some of the best hardware. Of course, I would also like to thank Intel and AMD for the processors.
We used the following versions of the test application:
Other notes about our testing methods:
The tests and methods we use are usually publicly available and repeatable. If you have questions about our methods, hit up our forums to talk with us about them.
These synthetic tests are designed to measure specific properties of the system, and they may not ultimately track real application performance in every respect. Still, they can be illuminating.
One of Piledriver's purported tweaks is an improved hardware prefetcher, which fills the L2 cache by watching access patterns and predicting what data will be needed next. Whatever AMD changed there doesn't show up in our Stream results, where the FX-8350 matches the FX-8150 almost exactly. Many of the Intel chips extract more bandwidth from the same dual-channel DDR3 memory config. The Core i7-3820 and 3960X have four channels and achieve nearly double the transfer rates.
This test is multithreaded, so it captures the bandwidth of all caches on all cores simultaneously. Varying the test block size lets us transition from the L1 and L2 caches out to the L3 and main memory.
Although the FX-8350 achieves higher cache throughput than the FX-8150, we can attribute the difference to the 8350's higher base clock. We may be seeing the impact of Piledriver's larger L1 TLB at the 32KB block size, but it's tough to say for sure.
SiSoft offers a nice latency testing tool for those who are interested. We used the "in-page random" access pattern to reduce the impact of the prefetchers on our measurements. We've reported the results in terms of CPU cycles, which is how the tool returns them. As with past latency measurements, the trouble with converting these results to nanoseconds is that we don't always know the CPU's clock speed, since it depends on Turbo behavior. At any rate, knowing latencies in clock cycles helps in understanding, for instance, the differences between Bulldozer and Piledriver. Let's have a look.
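The cycles-to-nanoseconds caveat comes down to simple arithmetic, as in this sketch. The cycle count and clock speeds here are illustrative, not measured values:

```python
def cycles_to_ns(latency_cycles, clock_ghz):
    """Convert a latency figure from CPU cycles to nanoseconds.

    At N GHz the core completes N cycles per nanosecond, so the same
    cycle count maps to different wall-clock latencies depending on
    whether Turbo is engaged. That is why cycle counts are the more
    trustworthy unit when the effective clock is uncertain.
    """
    return latency_cycles / clock_ghz

# A hypothetical 200-cycle main-memory access:
print(cycles_to_ns(200, 4.0))  # 50.0 ns at a 4.0GHz base clock
print(cycles_to_ns(200, 4.2))  # ~47.6 ns if Turbo holds 4.2GHz
```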
On a per-cycle basis, Piledriver's memory subsystem appears to be no quicker than Bulldozer's. In fact, the FX-8350's caches are a bit slower at each step of the ladder.
We don't have a proper SPECrate-style test in our suite (yet!), but I wanted to take a quick look at some synthetic computing benchmarks to get a sense of how the different architectures compare before moving on to more varied workloads based on real applications. These simple tests in AIDA64 are nicely multithreaded and use the latest instructions, including Bulldozer's XOP in the CPU Hash test and FMA4 in the FPU Julia and Mandel tests.
The FX-8350 places near the top in the CPU Hash test, which isn't surprising given how relatively strong AMD's processors are in this integer-based benchmark. The more FPU-intensive fractal tests are a different story entirely, with the Sandy and Ivy Bridge-based chips topping the charts. Although in theory Vishera's four FPUs should be capable of the same peak FLOPS per clock as any Sandy or Ivy quad-core, the FX-8350's throughput is much lower, even with its clock speed advantage. With FMA instructions and a 4GHz base clock, the FX-8350's four FPUs can at least outperform the six older FPUs on the Phenom II X6 1100T, a feat the FX-8150 couldn't manage.
The workload for this test is based on commands taken directly from the x264 benchmark we use to encode video, which you'll see later. The encoding job is a two-pass process. The first pass is lightly multithreaded, which gives us a chance to observe power draw while mechanisms like Turbo and core power gating are in play. The second pass is more widely multithreaded.
We tested all of the CPUs in their default configurations, including a discrete Radeon card. We also popped out the discrete card to measure the power draw of the A10, Core i3, and A8-3850 on their own.
The raw plots above give us a good sense of several things, including the sizable gap between the peak power draw of the AMD and Intel solutions in the same price range.
Notice that the Core i5-3570K draws virtually the same power in the lightly threaded first pass of the encoding process as in the heavily multithreaded second pass. Presumably, that means the CPU is making full use of its specified power envelope in both phases. The FX-8150 isn't far from doing the same. The FX-8350, though, draws considerably more power in the second pass than in the first, which suggests its 4.2GHz Turbo frequency is relatively conservative, leaving some headroom on the table.
The FX-8350 is a very big chip with a large power envelope, so these results aren't surprising; its basic parameters haven't changed since the FX-8150. The test system built around its closest competitor, the Core i5-3470, draws over 20W less at idle and over 100W less under load than our FX-8350 test rig.
We can quantify efficiency by looking at the total energy used, in kilojoules, over the span of our test period, while the chips are busy and at idle. By this measure, the FX-8350 is an improvement over the FX-8150, because it finishes the job and drops back to idle sooner.
Perhaps our best measure of CPU power efficiency is task energy: the energy used while encoding the video. This measure rewards CPUs for finishing the job swiftly, and it doesn't count idle power draw.
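Task energy is simply power integrated over the duration of the encode, so a chip that finishes sooner stops accumulating joules sooner. A minimal sketch, using made-up power traces rather than our logged data:

```python
def task_energy_kj(samples_watts, interval_s):
    """Integrate power samples over the encode to get task energy in kJ.

    samples_watts: power draw logged at a fixed interval while the
    encoder runs. Sampling stops when the job finishes, so a faster
    chip accumulates fewer samples.
    """
    joules = sum(samples_watts) * interval_s
    return joules / 1000.0

# Hypothetical traces: a 196W chip finishing in 60s vs. a 110W chip in 120s
fast_hot = [196.0] * 60    # one sample per second
slow_cool = [110.0] * 120
print(task_energy_kj(fast_hot, 1.0))   # 11.76 kJ
print(task_energy_kj(slow_cool, 1.0))  # 13.2 kJ: the faster chip wins
```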
A 125W processor may not strike anyone as particularly energy-efficient, but the FX-8350 requires less energy to do this job than any AMD processor before it. That's a pretty solid advance over the Bulldozer-based FX-8150, especially since Vishera is just a tweaked chip based on the same basic architecture and the same 32nm SOI fabrication process.
Then again, the Intel competition is much more efficient overall: not just the 22nm Ivy Bridge parts, but the 32nm Sandy Bridge chips, too.
For game testing, we're using our latency-focused testing methods. If you're not familiar with this work, you may want to check out our recent article on the subject, which covers a portion of this data and offers a good explanation of our methods.
As the plot shows, the FX-8350 improves on the FX-8150 and Phenom II X6: it produced more frames during the test run, with fewer and shorter latency spikes along the way. (Frame-time plots for all of the CPUs tested are available elsewhere in this review.)
Although the FX-8350's FPS average is the highest of any AMD processor we've tested, the Phenom II X4 980 still leads it in our latency-focused metric, the 99th-percentile frame time. Either way, the FX-8350 is one of AMD's fastest gaming chips, but a glance at the recent Intel processors shows the problem with that statement. Even the low-end Pentium G2120 is faster in this test scenario.
We suspect the Bulldozer architecture's trouble in games comes down to relatively low per-thread performance in lightly threaded workloads. In many games, a single, branchy control thread tends to become the performance limiter. The FX-8150's frame times climb sharply over its last 5% or so of frames. The FX-8350 hasn't really changed that dynamic, since the spike over the last 5% is still there, but its frame times are much shorter. That improvement is enough to push the FX-8350 slightly ahead of the Phenom II X6 1100T over those last few percentage points. That's progress. Unfortunately, AMD still has a long way to go to catch up with Intel's current processors.
Before anyone panics over this latency-sensitive game test, let's ground our analysis in reality by considering the time spent on truly long-latency frames. Once we do, some of the practical concerns about the FX-8350's performance evaporate. Almost none of the processors spend any time on frames longer than 50 milliseconds, our usual "badness" threshold. That means most of these CPUs, including the FX-8350, deliver reasonably smooth animation. In fact, we have to skip past our usual next stop of 33 milliseconds (30 FPS) all the way down to 16.7 milliseconds (the equivalent of 60 FPS) to see meaningful differences between the CPUs.
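The "time spent beyond a threshold" measure counts, for every frame longer than the threshold, only the excess above it, which is how a couple of spikes can add up to a few tens of milliseconds while the rest of the run stays smooth. A small sketch with an invented frame-time trace:

```python
def time_beyond_ms(frame_times_ms, threshold_ms):
    """Total time (ms) spent past a frame-time threshold.

    For each frame, only the portion above the threshold counts,
    so a mostly smooth run with two spikes yields a small total.
    """
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

# Hypothetical trace (ms): ten smooth frames, then two spikes
frames = [16.0] * 10 + [65.0, 90.0]
print(time_beyond_ms(frames, 50.0))  # 15 + 40 = 55.0 ms of "badness"
```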
As we traverse Arkham's cityscape, the game engine must constantly stream in new areas, and that hard work appears to be partially CPU-bound. You can see spikes at semi-regular intervals in all of the frame-time plots, and you'll notice the spikes tend to be shorter on the faster processors.
The FPS averages and our 99th-percentile frame time metric agree: in a tough test full of slowdowns, the FX-8350 beats AMD's previous champion, the Phenom II X4 980. The 99th-percentile result illustrates the point nicely: the FX-8350 delivered 99% of the frames in this test in under 25 milliseconds, which works out to 40 FPS.
The FX-8350's latency curve looks pretty good, too, with a smooth and modest upward slope over the last few percent of frames.
Occasional spikes throughout the test mean every CPU spent at least a little time beyond our 50-millisecond threshold, but the FX-8350 racked up only 70 milliseconds on long-latency frames over our entire 90-second test run. That amounts to a momentary hiccup, and it's less than half the time the FX-8150 spent beyond the same threshold. Still, competing solutions like the Core i5-3470 and i5-3570K nearly eliminate these hitches altogether.
This game runs well on almost every CPU we tested, with one notable exception: the Pentium G2120, the only processor in the group with just two physical cores and two logical threads. The rest offer at least four threads, via Hyper-Threading where necessary.
The FX-8350 performs well in this nicely threaded game engine, especially in the latency-focused metric we care about most. In fact, the FX-8350 spent the least time of any CPU beyond our ultra-strict 16.7-millisecond threshold.
Note the spike at the beginning of the test run; it happens on every CPU, and you can feel the hitch while playing. Evidently, the game is loading data for the area we're about to enter. Faster CPUs tend to shrink the spike.
Here are more signs of life from the AMD camp. The FX-8350 matches its Intel rival, the Core i5-3470, in average FPS, and its 99th-percentile frame time is only slightly longer.
The difference between the FX-8150's and FX-8350's latency curves illustrates AMD's progress. The FX-8150 struggles over roughly its last quarter of frames, with latencies climbing toward 20 milliseconds, while the FX-8350 doesn't reach 20 milliseconds until its last 4% or so of frames. And over the last 1% or so of truly tough frames, the FX-8350 rivals its Intel competition.
As the plot shows, a big spike at the beginning of the test run accounts for nearly all of the time spent beyond the 50-millisecond threshold, even on the faster CPUs. On the FX-8350, we spent about 50% more time waiting on long frames than on the competing Intel parts.
Over the years, many readers have suggested that some kind of real-time multitasking test would make a good benchmark for multi-core CPUs. That goal has proven difficult to achieve, but we think our latency-oriented game testing methods can get us there. All we did was play the same game, with the same settings as our earlier game test, through a 60-second tour around Whiterun, while Windows Live Movie Maker converted a video from MPEG2 to H.264 in the background. Here's what the experience was like with the encode running.
Well, I suppose this is either good news or bad news, depending. On the positive side, the FX-8350 outperforms any prior AMD CPU in this scenario, delivering a fairly smooth gaming experience while encoding video in the background. On the downside, the eight-core FX-8350 doesn't deliver as good a multitasking experience as Intel's quad-core competition. Even the two-generations-old Core i5-760 is quicker.
This benchmark runs in two modes: one that uses the graphics card to draw everything on screen, just like in the game, and one that runs entirely in software, with no rendering, as a pure CPU performance test.
Either way, the FX-8350's standing mirrors what we've seen in our other game tests: it's reasonably quick in absolute terms, it improves handily on previous AMD chips, and it still has a long road ahead to catch Sandy Bridge, let alone Ivy.
Another ongoing request from readers has been some kind of code-compilation benchmark. With the help of our local developer, Bruno Ferreira, we've finally made it happen. Qtbench measures the time required to compile the Qt SDK using the GCC compiler. Here are Bruno's notes on how he put it together:
Qt SDK 2010.05 for Windows, compiled using the bundled MinGW port of GCC 4.4.0.
Even though the Linux version works and supports multithreaded compilation out of the box, the Windows version had to be hacked to do the same, owing to trouble with some of its batch files.
Once effective multithreaded compilation was working (with a configurable number of simultaneous jobs), it was time to get the compile time down from 45+ minutes to something manageable. That required some serious makefile hacking to pare the build down to a leaner version that would, ideally, still finish compiling before hell froze over.
Then, some modifications were needed to make the test flexible about the path it lives in. That meant yet more makefile mangling (poor things).
The number of jobs the Qtbench script schedules is configurable, and the compiler does some multithreading of its own, so we ran calibration tests to determine the optimal number of jobs for each CPU.
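Calibrating the job count boils down to timing the build at several job counts and keeping the fastest. A sketch of that bookkeeping, with invented timings (not Qtbench output):

```python
def best_job_count(timings):
    """Given {jobs: build_seconds} from calibration runs, return the
    job count that produced the shortest compile time."""
    return min(timings, key=timings.get)

# Hypothetical calibration results for one eight-thread CPU:
calib = {4: 310.0, 8: 240.0, 16: 255.0}
print(best_job_count(calib))  # -> 8
```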
TrueCrypt supports acceleration via Intel's AES-NI instructions, so encryption with the AES algorithm should be especially fast on CPUs that support those instructions. We've also included results for another algorithm, Twofish, which can't be accelerated with dedicated instructions.
Ah. Now that we're past the game tests, we're into friendlier territory for the FX-8350. Those eight integer cores can come into play in most of the tests above, so the FX-8350 keeps pace not just with the Core i5-3570K but with the pricier Core i7-3770K. SunSpider is the lone exception to this trend, probably because not everything in it is widely multithreaded.
The Panorama Factory handles an increasingly popular image-processing task: stitching multiple shots together into a wide panorama. The job can require lots of memory and computation, so The Panorama Factory comes in a widely multithreaded 64-bit version. I asked it to merge four pictures, each eight megapixels, into a glorious panorama of the interior of Damage Labs.
In the past, we've added up the time taken by all of the different steps of the panorama-creation wizard and reported that number along with the stitch time. However, that's data-entry intensive, and the process tends to be dominated by a single long operation: the stitch. So we've decided to report only the stitch time, which saves us a lot of work while still getting at the heart of the matter.
picCOLOR was created by Dr. Reinert H. G. Müller. This isn't Photoshop; picCOLOR's image analysis functions are aimed at scientific applications such as particle flow analysis. Dr. Müller has kindly supplied us with new revisions of the program over time, optimizing picCOLOR to take advantage of new advances in CPU technology, including SSE extensions, multiple cores, and Hyper-Threading. Many of its functions are multithreaded.
At our request, Dr. Müller graciously agreed to re-tool his picCOLOR benchmark to include some real-world use cases. As a result, we now have four tests that use picCOLOR for image analysis: particle image velocimetry, real-time object tracking, bar-code search, and label recognition and rotation. For the sake of brevity, we've reported an overall score for these real-world tests.
This benchmark exercises one of the most popular H.264 video encoders, the open-source x264. The results come in two parts, one for each of the two passes the encoder makes through the video file. I've chosen to report them separately, since that's how the results are usually reported in this benchmark's public results database.
In this test, we used Windows Live Movie Maker to convert a 30-minute TV episode, recorded in 720p .wtv format on a Windows 7 Media Center system, into a 320×240 WMV-format video suitable for mobile devices.
In each of the tests above, the FX-8350 again outperforms the FX-8150, but these image-centric applications are a tougher matchup against Intel. Only in the second pass of the x264 test does the FX-8350 match or beat its closest Intel competition.
Since LuxMark uses OpenCL, we can use it to test both GPU and CPU performance, and even to compare performance across processor types. OpenCL code is inherently parallel and relies on just-in-time compilation, so it should adapt well to new instructions. For instance, Intel and AMD both offer OpenCL installable client drivers (ICDs) for x86 processors, and both claim to support AVX. AMD's APP driver even supports the FMA4 and XOP instructions unique to Bulldozer and Piledriver.
We'll start with the CPU-only results, which come from AMD's APP ICD for OpenCL, since it tends to be faster on both Intel and AMD CPUs.
Next, we'll look at the performance of the Radeon HD 7950 when driven by each of these CPUs.
Finally, we can combine the computing power of CPU and GPU to see if we can use both processor types at the same time to solve the same problem, thereby improving performance.
When asked to crunch the problem entirely via AMD's APP ICD, the FX-8350 clearly outperforms the Core i5-3570K. Only the newest Intel CPUs with Hyper-Threading and four or more cores are faster. The Radeon, though, is far more proficient at this task than any of the CPUs, and as with most of the processors, the FX-8350 fares best when it simply hands the work to the Radeon rather than trying to pitch in.
The Cinebench benchmark is based on Maxon's Cinema 4D rendering engine. It's multithreaded and comes with a 64-bit executable. The test runs with just a single thread first, and then with as many threads as there are CPU cores (or hardware threads, on CPUs with multiple hardware threads per core).
The FX-8150 was no slouch in these rendering applications, and the FX-8350's generous gains over its predecessor put it near the top of the charts, comparable to Intel's Hyper-Threaded quad-cores.
MyriMatch is intended for use in proteomics, the large-scale study of proteins. More information about it is available elsewhere.
Euler3D tackles the computational fluid dynamics problem of simulating airflow. Like MyriMatch, it tends to be limited by memory bandwidth. More information about it is available elsewhere, too.
Performance in these two scientific computing workloads used to track together very closely (believe it or not), and both seemed to be limited mostly by memory bandwidth. Over time, though, results in the two have diverged depending on the CPU architecture.
All AMD FX processors are unlocked, so in theory, overclocking them is as easy as raising the multiplier. I usually prefer to overclock CPUs via the BIOS (er, firmware) rather than with the various Windows utilities out there. Lately, though, I've come to like the simplicity and speed of AMD's Overdrive utility, and its ability to control Turbo Core behavior very precisely. So when it came time to overclock the FX-8350, I decided to use Overdrive. I'm not sure that was the right choice, but it's what I used.
When overclocking a 125W CPU, you'll want proper cooling. AMD recommends the big FX-branded water cooler we've used before, but I was feeling lazy and figured the Thermaltake Frio OCK already mounted on the CPU should suffice. After all, the heatsink is about as big as the water cooler's radiator, and it's rated to dissipate up to 240W. Also, I assure you there's plenty of space (over an inch of clearance) between the CPU fan and the video card, even if it doesn't look like much in the picture above. As it turned out, the Frio OCK kept CPU temperatures in the mid-50°C range even at full tilt, so I'd say it did its job well enough.
Trouble is, I didn't get the results I'd hoped for. As always, I logged my attempts at various settings, and I've copied my notes below. I tested stability with the multithreaded Prime95 torture test. Note that I took a very simple approach, raising only the CPU voltage itself, without touching the VRM or any other voltages. Perhaps that's why my attempts went the way they did:
4.8GHz, 1.475V - rebooted
4.7GHz, 1.4875V - locked up
4.6GHz, 1.525V - errored in the multithreaded test
4.6GHz, 1.5375V - errored at ~55°C
4.6GHz, 1.5375V, turbo fan - stable at ~53.5°C, eventually locked up
4.6GHz, 1.5375V, manual fan at 100% duty cycle, 50°C - locked up
4.6GHz, 1.55V, manual fan at 100% duty cycle, 50°C - crashed at ~54.6°C
4.4GHz, 1.55V - OK
4.5GHz, 1.55V - OK, ~57°C, 305W
4.5GHz, 1.475V - errored
4.5GHz, 1.525V - errored
4.5GHz, 1.5375V - OK, ~56°C
At the end of this process, I'd squeezed only an extra 500MHz out of the FX-8350, at 1.5375V, one notch below the maximum voltage the Overdrive utility exposes. AMD told reviewers to expect speeds approaching 5GHz, so evidently either I failed or this particular chip isn't very cooperative.
I disabled Turbo Core for my initial overclocking attempts, but once I'd established a stable base clock, I found I could reach higher speeds by creating a Turbo Core profile, which got me to 4.8GHz at 1.55V. Here's how a couple of our benchmarks ran on the overclocked FX-8350.
A few other considerations. First, remember that we measured the stock-clocked FX-8350 system's peak power draw at 196W during x264 encoding. The overclocked and overvolted config tested above peaked at about 262W, quite a bit higher. As you might imagine, when coping with that kind of heat, our Frio OCK got about as loud as Joe Biden in the vice-presidential debate.
Second, I'd hoped to include some game testing to see how much the FX-8350's gaming performance improves at higher clock speeds, but our overclocked configuration wasn't entirely stable when I tried it. The game didn't crash, but our character would intermittently run around of his own accord. (I tried mightily to resist making another Biden reference here.) We'll have to spend more time with the FX-8350 to find its optimal overclocking configuration.
As you may have gathered, the FX-8350 improves handily on its Bulldozer-based predecessor, and it does so without a die shrink or a brand-new architecture.
The final verdict on the FX-8350 isn't hard to render, but it does involve several moving parts. As usual, our value scatter plots will help us sort out the key issues. I've created a couple of them for your viewing pleasure. The first shows overall performance (a geometric mean) across our entire CPU test suite, excluding the synthetic benchmarks on page 3; our game tests are part of this overall performance metric. The second scatter plot isolates gaming performance, where we've converted our latency-focused 99th-percentile frame time results into FPS for easier reading. In both plots, the best values sit near the upper left, where prices are low and performance is high.
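The two transformations behind those scatter plots are straightforward: a geometric mean to combine suite-wide scores, and 1000/x to turn a 99th-percentile frame time in milliseconds into an FPS-style number. A sketch with illustrative inputs, not our actual data:

```python
import math

def percentile_to_fps(frame_time_ms):
    """Convert a 99th-percentile frame time (ms) into an FPS equivalent."""
    return 1000.0 / frame_time_ms

def geomean(scores):
    """Geometric mean across a suite of normalized test scores."""
    return math.exp(sum(math.log(s) for s in scores) / len(scores))

print(percentile_to_fps(25.0))  # 25 ms works out to 40.0 FPS
print(geomean([2.0, 8.0]))      # -> 4.0
```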
The overall performance scatter holds some good news for AMD fans: the FX-8350 outduels the Core i5-3470 and 3570K in our admittedly multithreading-friendly test suite. The FX-8350 thus offers better performance per dollar than the Core i5-3570K, and it's at least comparable to our favorite, the Core i5-3470.
Flip over to the gaming scatter plot, though, and the picture changes dramatically. There, the FX-8350 is the highest-performing AMD desktop processor for gaming to date, finally dethroning the venerable Phenom II X4 980. Yet the FX-8350's gaming performance almost exactly matches that of the Core i3-3225, a $134 Ivy Bridge-based processor. Meanwhile, the Core i5-3470 delivers markedly better gaming performance for less money than the FX-8350. The FX-8350 isn't exactly bad for video games; its performance in our tests was generally acceptable. It's just relatively weak compared to the competition.
Of course, this odd split between the two performance pictures isn't limited to games. The FX-8350 also trails in the image-processing applications, in SunSpider, and in the less widely multithreaded portions of our video encoding tests. These cases lean on the performance of one or a few threads, and that's where the FX-8350 suffers compared to recent Intel chips. Still, the contrast between the FX-8350 and the Sandy/Ivy Bridge chips isn't nearly as stark as it was with the older FX processors. Piledriver's IPC gains and that 4GHz base clock have blunted our objections.
Another major consideration is power consumption, and in this area, the FX-8350 simply isn't in the same class as the Ivy Bridge-based Core i5 processors. The gap between the Core i5 parts' TDP and the FX-8350's is 48W on paper, but in our tests, the actual difference at the wall between two similarly configured systems under load was over 100W. That gap is large enough to force the potential buyer to think carefully about the power supply, case, and CPU cooler his build will require. One could certainly get away with cheaper components for a Core i5 system.
That's probably why AMD has given buyers some incentives to choose the FX-8350, including a rather aggressive $195 price tag and an unlocked multiplier. If you're willing to tolerate more heat and noise from your system, if you're not especially bothered by the occasional hitch or slowdown during games, and if what you really want is maximum multithreaded performance for your dollar... then the FX-8350 may well be your next CPU. I can't say I'd go there myself. I've grown too picky about heat and noise over time, and gaming performance matters a lot to me.
Nevertheless, with the FX-8350, AMD has once again hit on a formula PC enthusiasts have loved time and again: more performance per dollar than anything else at a sweet spot below $200. That's progress we can endorse.
I don't know how you were overclocking your FX-8350, but I've successfully run my 8350 (batch 1244) at 4.8GHz with 1.350V on all eight cores. It also runs at 4.7GHz with 1.40V on all eight cores and a 277MHz bus speed. With an H100 water cooler, the temperature never exceeds 59°C.
1.5+V? That's a lot of voltage; no thanks. I've run extended Prime95 and IntelBurnTest sessions for at least an hour with no errors at these settings.
Your mileage may always vary when overclocking.
I want to see these tests redone on Windows 8! Win 8's scheduler optimizations can supposedly make better use of AMD's modules.
This! Is! SPARTA!
(Comment count, ahem: 300.)
I just retired a four-year-old i7-940 at the office, and I considered bringing it home to upgrade my old Core 2 box. (I decided against it, since tracking down a Socket 1366 motherboard in mATX is nearly impossible these days.)
Since the i7-940 is comparable in most respects to the old i7-875K used in these benchmarks, it really shows that you shouldn't game on AMD. Their best effort today still can't keep up with an obsolete processor from four years ago. Worse still, the 940 wasn't even Intel's fastest chip in 2008; it was just a better value than the 965.
2008? My, how the years fly...
I don't even think of Nehalem as that old: it's still only a step below the 2500K in today's desktops, and a few turns of the overclocking screws close the gap quickly.
Honestly, the fact is that the CPU no longer matters much for gaming. It's all about the GPU.
It's been that way since the Core 2 era.
The reason is simple: games depend heavily on single-threaded performance, and single-threaded performance is mostly a function of clock speed. Clock speeds haven't improved much since Core 2.
I think you need to play some newer games... even given its overclocking headroom, a Core 2 wouldn't be fast enough.
That assertion was once correct, but TR's frame-time plots now show otherwise. Average frame rates? Sure. Smooth frame delivery? Not bloody likely.
What are you smoking? No game is "unplayable" on a Core 2 / Athlon 64-era processor. Sure, the e-peen scores will be lower, but paired with a good GPU, those chips can still deliver a smooth gaming experience.
Newer CPUs produce higher peak FPS, which is why their average FPS scores are higher.
But the CPU has very little impact on the minimum FPS (which is what actually matters), because that comes down to your GPU.
The reason is simple: most games are single-threaded, and "multithreaded" games are really dual-threaded, which means a quad-core or better chip doesn't help at all. That's why CPU architectures aimed at multithreaded applications (Bulldozer/Piledriver) lag behind here. Clock speed and IPC are still king, and there hasn't been much progress on that front since Nehalem.
Start with BF3. Try keeping frame times under 16.7 ms on an Athlon 64 or a Core 2. And no, I'm not talking about the benchmarks TR runs.
In fact, the whole reason I considered taking the i7-940 home as a "free" upgrade is that my Q9550 can no longer hack it in current games, and the Q9550 represented the high end of the Core 2 line.
At first I wondered whether the GTX 460 was the culprit, so I dropped a 7950 into the test box. But no, the games still suffered frame-rate drops. The GTX 460 is hardly a modern graphics card, but I run most games at 720p because I sit ten feet from the screen, and that's plenty.
That sounds like a configuration problem, or a monitor with too low a refresh rate.
A Q9550 + GTX 460 combo can't be "unplayable" in modern games at 1280×720 (most games only use two threads).
Since the GPU is being held back by the CPU, the 7950 won't help at 1280×720. You'd only see an improvement over the 460 at 1920×1080 or 1920×1200 with a healthy dose of AA/AF thrown in.
If you were trying to run games at 2560×1600, then I could see the problem.
To be honest, I just built a computer for my girlfriend out of parts from the Bargain Basement: a C2Q Q9550, 4GB of DDR2-1066, and my old Radeon HD 4850... It plays most modern games at 1920×1080 on high settings. Not "highest," but high. Can't say I'm too upset.
Not knocking the old parts, but at the same time, there are bleeding-edge games that benefit from Sandy/Ivy's higher clocks and higher IPC to the point that nothing else (AMD or Intel) can deliver the same level of gaming performance.
BS. The HD 4850 is too slow to play Borderlands at a high setting of 1600×1200. It is too slow to play Metro 2033 near any high pitch. Fall into a Crysis under high settings. Etc., etc. I usually don't go that far, but I would say this: In this case, your point of view is wrong.
I kind of like Flip.
Yes, my C2Q [i
Quite a bit. Single-threaded performance just isn't there, and you can't reliably overclock the 65 nm and 45 nm parts high enough to make up the difference. That's the only reason I run a 2500K instead of the Q9550.
Yes, games still need good single-threaded IPC.
Of course some games make good use of multiple cores, but in general, if the cores they run on are slow, they'll always be limited by the engine's core functions. You can see this watching a multi-core program in a perfmon.exe window: four cores are in use, but only one is pinned at 100%, while everything running on the other cores is clearly waiting on it.
It's usually the rendering thread that stresses a core the most, because only one thread can feed the GPU at a time.
It may never happen, but I wonder whether AMD/Intel/ARM are working on asymmetric multi-core processors:
Say a 9-core processor with 8 small cores (cores from the new Silvermont Atom architecture, for example) plus a single big core with the best single-threaded IPC possible, say an Ivy Bridge module clocked at 5 GHz.
I bet that in today's software market, where even heavily-threaded workloads are always bottlenecked by one thread, it would do well.
By that line of thinking, no CPU has ever been made that meets your definition of game-worthy. Don't get me wrong, I like 60 frames per second delivered without a hiccup as much as the next guy. But "unplayable" is... completely wrong. She and I played Borderlands 2 on that same machine (though, granted, she's currently running at 1280×1024 on a 17-inch CRT).
I'm not saying it's ideal. But I dispute that it's "not playable". People on limited budgets build gaming systems around a Pentium G870 or Athlon II X4 and play games on them. Some of my friends have built systems with a Radeon HD 5770 1 GB + Athlon II X4, and for their market segment those systems run most modern games very well. Again, not ideal, but playable.
I have a TV gaming box with a Phenom II X4 at 3.4 GHz and a GTX 560 Ti at 1360×768. Almost every game runs at 60 fps. Crysis 2 admittedly needs object detail dropped from Ultra because of its ridiculous tessellation.
But I have played some games that run better on the 4.3 GHz Core i5 in my desktop. Hard Reset can get into trouble when a lot of physics is going on; of course, you can turn the physics setting down a notch. And SupCom FA gets very heavy in big multiplayer games because the simulation thread sits on one core.
I think Krogoth is stuck in 2007.
No, it's more like developers are coding titles with ~2005-2006 hardware as the baseline.
No, they're not. You talk as if all developers ship the same nonsense, but that's not the case. Some developers push the envelope, and others release undemanding games.
[URL
42 watts idle, 136 watts under load.
That's a 94-watt delta, which seems to match the 100-watt TDP.
It makes sense if they used a tool that loads only the CPU and not the RAM.
Seems 100% legit.
TR's power numbers:
64 watts idle, 196 watts under load (x264).
That's a 132-watt delta. My guess is that RAM and I/O are being stressed and account for the extra wattage, but 32 watts?!? What could be going on with the TR test bench?!
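For what it's worth, the arithmetic being argued over here is easy to lay out. A quick sketch using only the wattages quoted in this exchange (the 100 W TDP figure is the one under dispute in the thread, not an official spec I'm vouching for):

```python
# Idle/load wall-power figures as quoted in this thread (watts)
tpu_idle, tpu_load = 42, 136   # Techpowerup numbers
tr_idle, tr_load = 64, 196     # Tech Report numbers (x264 load)
tdp = 100                      # the TDP figure being argued over

tpu_delta = tpu_load - tpu_idle   # load-minus-idle delta for Techpowerup
tr_delta = tr_load - tr_idle      # load-minus-idle delta for TR

# TPU's delta sits just under the quoted TDP; TR's overshoots it.
tr_excess = tr_delta - tdp        # the "mystery" wattage in this exchange

print(tpu_delta, tr_delta, tr_excess)  # 94 132 32
```

Whether those 32 watts come from RAM, the southbridge, or measurement methodology is exactly what the rest of the thread is chewing on.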
Techpowerup measures power only at the PSU's 8-pin ATX connector, while TR measures [i
Check the graphs again. Techpowerup did both:
full-system and 8-pin power under load.
The figures I quoted are the full-system load numbers.
136 watts for the CPU alone would be just plain wrong, and expensive.
AMD can't sell a 100 W TDP part that actually draws 140 watts. (And they didn't.)
Still a bit fishy. I can't see where the extra 32 W in the TR test comes from.
There can't be that much DDR3 draw under load, and the SSD should be minimal, right?
Maybe it's the southbridge, since they tested video transcoding, which should stress the I/O controller.
Question for the thread: what is the maximum temperature for this chip? I thought I'd read about 70 or 72 °C, but at [url=http://products.amd.com/zh-CN/DesktopCPUDetail.aspx?id=809&f1=AMD+FX+8-Core+Black+version&f2=&f3=&f4=1024&f5=AM3%2b&f6=&f7=32nm&f8=125+W&f9=5200&f10=False&f11=False&f12=True
That's still not enough to win over any Intel customers.
Why is that? It's worse than any Intel quad-core.
Their performance is about 30% lower than Intel's and, at comparable performance, their power consumption is about 50% higher.
Cinebench:
FX-8350: 6.94, $195
i7-3770K: 7.54, $329
If Intel were 30% faster, the i7's score should be above 9. The gap is obviously much smaller.
Unless you're comparing against the $1,000 LGA2011 six-core?
If you look at the Intel CPUs that actually cost around $230, the i5-3570K (6.03) comparison actually favors AMD.
So with the FX-8350 you get more performance for less money.
The 50% at full load is correct.
But AMD can hardly move to 22 nm tri-gate "soon".
An 1100T costs $150... and at 4 GHz I get 7.xx in Cinebench.
No, but at least it can retain some customers.
I have to say, I've always felt these benchmark suites are heavily overweighted toward multithreaded performance. Or, put another way: it's not that multithreaded performance isn't important, it's that single-threaded performance is seriously under-represented. I have an X4 955 at home and an i5-2500 at work. Even when I overclock the X4 955 to X4 980 speeds, the difference in single-threaded work is huge. In Autodesk Revit, interacting with a building model is far smoother on the i5-2500. All the geometry on screen is processed by the CPU, not the GPU, apart from some very specific content, so the CPU makes a big difference to how responsive the model feels. And it's all single-threaded. There are only about 8 multithreaded functions in Revit. That's not because Autodesk is lazy; it's because most of these tasks are single-threaded by nature. I suspect that's true of most software.
I don't know how you'd adjust for that, short of finding a genuinely valuable set of single-threaded benchmarks and adding it as another "button" on the final scatter plot.
Does anyone have any suggestions for some single-threaded benchmarks?
Scott, might it be worth your time to contact Autodesk and see if they can recommend benchmarks for any of their software?
Edit: here is a list of what is multithreaded in Revit:
[URL
Interesting observation. I'd put it this way: some of us habitually multitask. It's not always decisive, but it can't be ignored. For example, I often run video on one monitor (a Netflix stream, C-SPAN, etc.) while web pages auto-refresh on another (the NYTimes homepage, or Yahoo News, which is a pig), and then I might also have chess analysis running (3 cores) while doing other things on the main display. I never want to go back to dual-core. So some kind of "multitasking" test would be very interesting to me.
Edit: in light of jesend's comment below, I changed the first word from "great" to "interesting".
I wonder whether a multitasking test bench would essentially defeat the purpose of getting the cleanest possible single-threaded numbers.
I don't understand what you mean.
Whether you look at multithreaded or single-threaded tests, the gap between the X4 955 and the i5-2500 is about the same. They have the same number of cores. Yes, the X4 is much slower. That's what you get when you compare an April 2009 AMD processor with a January 2011 Intel processor aimed at the same market segment. So?
Your complaint about niche-workload performance has nothing to do with whether TR's benchmark suite represents overall performance. Unless [url=http://en.wikipedia.org/wiki/NC_%28complexity%29
I don't want to make this personal. If my post came across as an attempt to get Revit in particular into the benchmark suite, I can assure you that's not my agenda; Revit is just my own case, and when it comes to benchmarks I asked the crowd for suggestions. If you disagree that single-threaded performance matters, I wish you'd simply say so, instead of taking a shot at me by suggesting I only want my own software tested. TR uses LuxMark, MyriMatch and other such benchmarks; the number of users they represent is statistically insignificant, yet they still effectively illustrate how a CPU responds to different types of load. Many of the benchmarks in TR's suite are probably used by fewer than 1 in 2,000 computer users, so that can't be the deciding factor.
To be fair to the TR staff and other sites, there are obvious reasons they've come to focus on multithreaded benchmarks; nowadays even video games are often multithreaded. Fine, fine, and understandable. As I said, I don't mean to imply in any way that multithreaded workloads aren't important; they are very important. What I'm suggesting is that single-threaded loads are under-represented. It would be great to address that in the benchmark suite and break it out in the final scatter plot.
I'm fairly sure you disagree with me on that point, but I can't tell whether it's because you think I want TR to run my own personal benchmark (which is not the case) or because you think single-threaded load doesn't matter.
There are some single-threaded/lightly-threaded benchmarks in there, x264 pass 1 and SunSpider for example, which are single- or lightly-threaded by nature. Of course the 3770K wins those, but it costs much more, and the 8350 crushes it where there are more threads, as in x264 pass 2. The main focus of these 8-core CPUs is thread-heavy workloads, but the other cases matter too, which is why I find the multithreaded benchmarks important. Of course that's only one application and others may differ, but going forward there will be more and more threads, and single-threaded applications are fast enough on current CPUs that they're usually not a big problem.
The 8350's win there is less than 7%, which is much smaller than the 3770K's wins over the 8350 in other heavily-threaded applications (such as Euler3D or picCOLOR).
Even as software gets more multithreaded in the future, the 3770K can match the 8350 at 8+ threads, and it is faster at 7 threads or fewer. That advantage grows to more than 50% at 4 threads.
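The crossover pattern being described, one chip ahead at low thread counts and the other catching up at full width, is what a simple Amdahl's-law model predicts when you trade per-thread speed for thread count. A toy sketch (the per-thread performance numbers and the 90% parallel fraction are made up for illustration, not measurements of these CPUs):

```python
def throughput(per_thread_perf, threads, parallel_fraction=0.9):
    """Toy Amdahl's-law throughput: the workload is `parallel_fraction`
    parallelizable; `per_thread_perf` is single-thread speed in
    arbitrary units; `threads` is how many hardware threads run it."""
    serial = 1.0 - parallel_fraction
    return per_thread_perf / (serial + parallel_fraction / threads)

fast_quad = throughput(1.5, 4)  # fewer, faster threads
slow_octo = throughput(1.0, 8)  # more, slower threads

# At full width the wide chip pulls roughly even...
print(round(fast_quad, 2), round(slow_octo, 2))  # 4.62 4.71
# ...but cap both chips at 4 threads and the fast one leads by 50%.
print(round(fast_quad / throughput(1.0, 4), 2))  # 1.5
```

The serial fraction is doing all the work in this model: the more of the workload is stuck on one thread, the more the per-thread-speed advantage dominates, which is the thread's whole argument in miniature.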
[Quote
"...if you want to kill some time, try upgrading a 650 MB file from one version of Revit to the next. It has been going for 1½ hours now, using about 8 GB of RAM on the i5-2500 machine, and it's still churning. All on one thread. I don't know how long an FX-8350 would take."
Are you sure that workload is processor-bound? It sounds memory/IO-bound to me.
To be sure I'd have to check memory and disk, which I haven't, so not 100% sure. But I have 24 GB of RAM, no page file (so I assume that means no disk paging), and an SSD. The process uses about 8 GB. In Task Manager it's pinned at 25% (i.e., one core) the whole time, with occasional spikes when I do other things. But it's a good question.
Given that AMD is about to disappear from the world, this conclusion reads like it's straining to find a way to compliment the CPU. I understand why you feel the need, but it's a mistake to claim there's a case for burning so much extra power and heat when the performance isn't there in the areas that matter. It makes no sense.
AMD needs to realize the world isn't what it was 5-10 years ago. We want our chips to sip power. People buying high-performance chips now mostly want them for gaming, because for most consumers almost any chip is already more than enough. Even today, most software's use of multithreading is limited to four cores.
For most people there is no convincing argument for Bulldozer or its Piledriver family. AMD should abandon the entire line, do what Intel did after Netburst, and bring back its older architecture as soon as possible while optimizing it, Pentium M-style.
Not that it can replace a full TR test, but HardOCP did a limited benchmark with the FX-8150, FX-8350, i7-2600K and i7-3770K all clocked at 4 GHz. The results are interesting for showing Piledriver's actual IPC advantage over Bulldozer: [url
On page 1 of the comments there's the following slide:
I don't know how accurate that die shot is, but here's a dumb question: if they dropped the L3 cache, wouldn't there be enough room for another 2 modules (4 cores), plus L2 to serve them? Or, if not whole modules, enough room for more cores?
Edit:
I just realized what the red in the FX logo stands for: the heat coming off the CPU.
I assume you're asking whether two or more extra dual-core modules would fit in the same 315 mm² in exchange for the L3, and the answer is probably no; maybe one module would. With more than 315 mm², yes, two modules or more would fit.
What you're pushing is viral marketing. Here are the HFR results, for a change,
run with actual software and games made by real people, instead of Intel's stooge benchmarks...
Goodness, that was as fast as I predicted. Let's weigh him.
That guy is a troll, and he showed his true ugly face
earlier in this thread.
Hey, idiot. I'll take TR's review more seriously than some random French website you dug up... Did it take you all day to find the one review that puts your beloved AMD in the best light?
Stop posing as a neutral party who only cares about the "truth". I've been through your posting history, and your desperation to defend AMD at all costs is getting more and more annoying. Here's your trip down memory lane: [URL
Calm down, you're feeding the troll...
If you think an Intel 6C/12T is enough to fight a 16C Opteron, then you're even worse off...
As for Intel Xeons, we all know why they look "better" under Windows; only suckers...
Still talking and talking and talking.
You sound like a much bigger troll than Chuckula. And all the insults...? Not very classy.
Bigger? No, I think your record beats mine.
Getting there, though, even if there seems to be more water in your wine
these days...
As for the insults, I hope you're smart enough...
The timing of this one...
Burn him! Then we can prove whether he's a witch.
In that case, I don't think the sucker will float.
Sampling... …. ………………. …
Yes, I definitely don't float. So I can't be a small stone, a church, or wood.
[b
Scroll down the page... the graphs are below the memory sticks....
No need to understand French; the numbers speak for themselves.
What's the difference between the red bars and the green bars?
The red bars are applications and the green bars are games; it's obvious
because the labels are on the left.
The two upper bars labeled "moyenne" (mean) are the averages:
the red one, "moyenne applis", is the application average;
the green one, "moyenne jeux", is the game average.
The average gap is 7.7% for applications and 13.5% for games.
The comparison was run at a fixed 4 GHz, so it's clock-for-clock.
Why all the hostility? The HardOCP article and your link complement each other; both show that AMD's new tweaks deliver a considerable gain over the previous generation.
HardOCP also noted that the new core draws 15 watts less under load than the previous generation, which is also good.
I have both AMD and Intel platforms, and I'm likely to replace an old tri-core CPU with one of these. Once prices settle to stable post-launch levels, I'd like to pick up a 6xxx within a month or so... I might even spring for an 8xxx.
The hostility you're referring to comes from an earlier exchange in this thread,
at the bottom of the page.
The guy insulted me and spread misinformation about me, so I gave him a taste of his own medicine in this thread.
Chuckula's AMDZone comment was uncalled for, but I think this thing between you two is getting to be a bit much, and not just here.
Can't we all get along?
Okay, I found it... Thanks.
Go to pages 4 and 5.
I agree that 2/3 is the correct rule of thumb.
10% comes from the clock, 5% from the architecture.
Check TR's numbers; 15% seems about right.
Way to get in there and overclock the chip, guys. Don't strain yourselves with the CPU overclocking next time. Most other sites seem to be hitting 5 GHz at 1.45-1.5 V under water, which makes this look ridiculous.
Ridiculous comment.
Let me get this straight: you think the review is ridiculous because of the limited overclocking?
From my point of view, Scott wrote an interesting, balanced review that focused on the performance characteristics of PD and BD and how they stack up against the competition.
Your handle is misleading; it should say "6cores" or "12threads".
Lol! I snorted coffee at that one.... Very nice.
The rage has him confused... it's a shame.
You're still angry that your previous handle got banned.
They're still valuable for understanding the new chip's cooling requirements. They didn't reach 5 GHz, but they got close enough that the difference doesn't matter much.
Back on topic... If AMD had gone with a more hyperthreading-like design, single-threaded execution wouldn't be so weak, and the module concept could still have been kept. Example: a module still has 2 cores, but one core gets hard-wired access to its partner core's units (the FMA, say), with a decoder smart enough to issue two operations to the main core when needed. That would preserve high performance on a single thread while balancing automatically under a full thread load. It would require one, very fat, decoder per module (it's currently single, but not as fat as I'd want) and ALU/FMA sharing (today only the FMA is shared).
Isn't Steamroller supposed to have 2 decoders per module?
Yes, with Steamroller AMD is moving to two decoders per module.
My cost formula goes like this: my 970 AM3+ motherboard and old power supply can easily handle an 8350, and the old $100 Phenom II is still fine. But what if, in some new situation, I need more CPU speed? Say 50% more: the 8350 raises CPU throughput by about 60%, so in my case an 8350 could do it.
So if I eventually want the extra power, factoring in the 8350's electricity cost over time, it looks like this:
If I drop in an 8350 and it burns an extra 80 or 100 watts at full CPU load (compared with building a more efficient machine from new parts) for, say, 4 hours a week, that's an extra $5 or so a year in electricity. Or worse: suppose I double or triple the time at max CPU, up to 12 hours a week; that's still only an extra $10-15 a year versus an Intel rig.
And this is a drop-in upgrade.
So at some point, when the price is right, this rig will probably end up with one.
I'd really like to see a series of power-draw figures at different clock speeds. The technical papers I've read about the resonant clock mesh in the Piledriver architecture seem to point to a shortcoming of the resonant mesh when overclocking.
Basically, AMD put inductors on the chip to build an oscillator that burns less power than a typical clock distribution network, and they use elements of a typical clock tree to set the exact frequency the oscillator runs at. The problem is, if the oscillator is designed for 3 GHz and you run the chip at 4 GHz, the power savings from the clock mesh become very small.
If you have time, Scott, I'd be very interested in seeing frequency versus power draw with voltage held constant. In a typical chip that relationship should be linear, but it may not be for Piledriver. I'm curious whether AMD chose to ship it at its most power-efficient frequency, or whether users can clock the chip up or down to find a more efficient point.
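The "should be linear" expectation comes from the standard CMOS dynamic-power model, P ≈ C·V²·f plus a frequency-independent leakage term. A rough sketch of what a constant-voltage frequency sweep looks like under that model; the capacitance, voltage, and leakage numbers are illustrative placeholders, not Piledriver's actual values:

```python
def package_power(freq_ghz, voltage=1.2, switched_cap_nf=30.0, leakage_w=15.0):
    """Classic CMOS model: P = C * V^2 * f + P_leakage.
    switched_cap_nf: illustrative effective switched capacitance (nF).
    leakage_w: illustrative static leakage floor (W)."""
    dynamic = switched_cap_nf * 1e-9 * voltage**2 * freq_ghz * 1e9
    return dynamic + leakage_w

# At constant voltage, every extra 0.5 GHz costs the same number of watts.
steps = [package_power(f) for f in (3.0, 3.5, 4.0, 4.5)]
deltas = [round(b - a, 3) for a, b in zip(steps, steps[1:])]
print(deltas)  # three equal ~21.6 W increments
```

If measured deltas on the real chip grew as the clock moved away from the mesh's design frequency, rather than staying constant as in this model, that would match the "savings evaporate when overclocked" concern above.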
Since the capacitance is inherent in the clock mesh itself (i.e., fixed), I assume they're using multi-tap inductors to tune the circuit: provide multiple drivers at different points along the inductor and switch them in or out as needed to adjust the resonant frequency.
It's like an old-school [url=http://en.wikipedia.org/wiki/Crystal_radio
If I remember correctly, the inductors have some tap points, but those are actually AC grounds (large capacitances).
I’m not sure if I can move effectively [i
Here's a brief introduction to resonant clock meshes (1 page):
Thanks for the link. Something interesting.
FX is competitive, that is, good enough in most cases.
But Jesus, that power draw. In some cases it's double the power for half again the performance. Terrible.
To give it a positive spin... look at it this way: winter heating costs will fall.
And what if you don't live in a cold country? Cooling costs will rise.
Go liquid cooling, run the tubing through the wall, and put the radiator outside. Or pump the heat into a hot-water storage tank and use the water to make coffee. Channel your inner engineer and find innovative solutions.
Edit: Typo
If you want a clear picture of the electricity bill, you have to look at what the CPU is used for, and how much.
For Deep Fritz chess analysis (see the X-bit Labs review, for example), I know exactly how the 8350 performs and can compare it for my own needs: chess analysis. (The 8350 is very close to the i7-3770 there; see the X position on page 4.)
5 hours of chess analysis per week × an extra 90 watts × 16 cents/kWh = about $5 a year in extra electricity.
So: $5 a year on the electric bill. Whether the exact number is $3.89 or $7.40 doesn't matter, the conclusion is the same: for an AM3+ rig that needs more speed than its old X3 or Athlon II X4, it's a very cost-effective drop-in upgrade.
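That back-of-the-envelope calculation is easy to reproduce and tweak. A quick sketch using the figures from this comment (the hours, wattage, and rate are the poster's own assumptions):

```python
def extra_cost_per_year(hours_per_week, extra_watts, cents_per_kwh):
    """Yearly electricity cost of an extra load run on a weekly schedule."""
    kwh_per_year = hours_per_week * 52 * extra_watts / 1000.0
    return kwh_per_year * cents_per_kwh / 100.0

# 5 h/week of chess analysis, 90 W of extra draw, 16 cents/kWh
print(round(extra_cost_per_year(5, 90, 16), 2))  # 3.74, the "about $5" ballpark
```

Triple the hours and it's still only around $11 a year, which is the point being made.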
That's an interesting way to look at it. In money terms, the extra power draw isn't a factor; $5 a year is insignificant next to the other drawbacks of such a hot-running CPU. In a hot summer, or in a bedroom computer, the extra heat and the noise that comes with it would be a real burden.
Good point. My office has always gotten the low-flow air-conditioning vents. Even with just 3 monitors it warms up, and on warm days that's with my equipment idle. Then again, even cutting total power to 100 watts wouldn't change much. And... I tend to do little chess analysis at the height of summer anyway. In any case, it's a cool observation.
So, should I buy some AMD stock? How much lower can it go?
Well, it can't go negative. So the answer is obviously zero.
Now this is the kind of response I want to see.
"Bureaucrat Conrad, you are technically correct-[i
Value doesn't matter; just follow the chart patterns and ride the trends.
A "price action" strategy would go something like this: watch the stock price and check whether it has hit a hypothetical "floor" twice. That may indicate it "doesn't want to go lower", which is when you buy, and [i
Damn
I read what you wrote before, and you're right. Price-action trading isn't investing for the vast majority of people; it's gambling. Unless you have solid evidence/analysis/hunches that a company can improve its fundamentals and/or add value for shareholders, or at least grow its book value, trading on price action will only beat the market in the short term.
Before connecting the amplifier directly to the power supply, the quiescent current of the output stage needs to be set. As a precaution, we recommend connecting two 47 Ω / 5 W power resistors in series with the positive and negative supply connections to the PCB. If something goes wrong (a short circuit somewhere), the amplifier itself will not be damaged; in the worst case the two power resistors burn out instead. A regulated supply is preferable, but few people have a dual supply that can be set to around ±56 V.

Before switching on, turn P1 fully counter-clockwise. The current drawn from the positive supply rail should be approximately 30 mA (with the output relay activated). Screw terminal K7 should be connected to the power transformer. Connect an ammeter in series (with the power resistor still in place), then slowly turn P1 clockwise until the current has risen by 30 mA, for a reading of 60 mA. This low setting is more than sufficient.

If the heatsink temperature rises, the quiescent current will rise slightly too, but it should stay below 90 mA. At high output power, the junction temperature of the two power transistors T4 and T5 rises much more than the heatsink temperature, so the VBE multiplier T1 cannot fully compensate. The quiescent current can then rise momentarily to several hundred mA, but it falls again as the drive is reduced and the heatsink cools. Seen that way, the amplifier has a nice side effect: you could say the class-A region of the output stage grows with the output power delivered.
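The role of those 47 Ω protection resistors can be sanity-checked with a bit of Ohm's-law arithmetic; the rail voltage, resistor value, and quiescent current are the figures given above, and everything else follows from them:

```python
rail_v = 56.0        # one supply rail (V)
r_prot = 47.0        # series protection resistor (ohms)
r_rating_w = 5.0     # resistor power rating (W)
i_quiescent = 0.060  # 60 mA target quiescent current (A)

# Normal operation: the resistor barely notices the quiescent current.
drop_v = i_quiescent * r_prot        # volts lost across the resistor per rail
p_normal = i_quiescent**2 * r_prot   # dissipation, far below the 5 W rating

# Fault (dead short): the full rail appears across the resistor.
i_fault = rail_v / r_prot            # fault current through the resistor
p_fault = rail_v**2 / r_prot         # dissipation far above 5 W: the resistor
                                     # burns out instead of the amplifier

print(round(drop_v, 2), round(p_normal, 2), round(i_fault, 2), round(p_fault, 1))
# 2.82 0.17 1.19 66.7
```

The ~0.17 W idle dissipation explains why the resistors can stay in circuit during the whole adjustment, while the ~67 W fault case is exactly the sacrificial behavior the text describes.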