I hate to further complicate things, but anyone remember the old Bob Carver amps that put out hellacious power in a tiny rack-mount chassis? This was before the class D amplifier days. How did he do it? Regulated variable tracking power supplies. The supply adjusts its rail voltage rapidly in sync with the incoming signal, which keeps dissipation in the output stage to a minimum.
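Just to make the idea concrete, here's a toy numerical sketch (Python, with completely made-up voltages, headroom, and signal, and not Carver's actual control loop): the rail tracks the signal envelope plus a few volts of headroom, so the output devices never sit across a big voltage drop while passing high current.

```python
import numpy as np

# Toy model of a tracking-rail supply vs. a fixed rail.
# All values are invented for illustration only.

fs = 48_000                        # sample rate, Hz
t = np.arange(fs) / fs             # one second of signal
# 100 Hz tone with a slow envelope, 40 V peak
signal = 40 * np.abs(np.sin(2 * np.pi * 0.5 * t)) * np.sin(2 * np.pi * 100 * t)

fixed_rail = 45.0                  # volts, sized for the peaks
headroom = 5.0                     # volts the tracking rail keeps above the signal
tracking_rail = np.abs(signal) + headroom

load = 8.0                         # ohms
i_out = np.abs(signal) / load      # magnitude of load current

# Dissipation in the conducting output device ~ (rail - |Vout|) * |Iout|
p_fixed = np.mean((fixed_rail - np.abs(signal)) * i_out)
p_tracking = np.mean((tracking_rail - np.abs(signal)) * i_out)

print(f"avg device dissipation, fixed rail:    {p_fixed:5.1f} W")
print(f"avg device dissipation, tracking rail: {p_tracking:5.1f} W")
```

With the tracking rail the devices only ever burn the headroom times the load current, which is why those amps got away with so little heatsink.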
Here is some info off the internetwebthing:
1) Magnetic Field Power Amp = a fancy (maybe even copyrighted) name just to stand out among other amps and dazzle (baffle) the uninformed consumer.
2) The only thing related to a magnetic field is the use of a TRIAC plus additional circuitry to modulate the primary of the power transformer to limit power/secondary voltage (see the first sketch after this list). It doesn't appear to be very well regulated and only seems to "stiffen" the AC line under heavy loads. Not real fancy, not real effective, and it can be very noisy due to the low-frequency AC line and the spikes caused by high-voltage switching.
3) All Carvers I've seen run 'Class G' (rail switching) circuitry in their output stage. They run low/med/high Vcc rails, with the 'Class A' drivers running on the highest rails and the additional output devices connected to the higher rails by fast-recovery diodes. The entire output stage is controlled by a single op-amp through global feedback. This gives the benefit of lower idle power (thus smaller heatsinks) along with high peak power; the second sketch below runs the numbers. The best example is their famous little cube amp, rated at 200 WPC with virtually no heatsink.
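On point 2: here's the back-of-the-envelope math for what phase-angle firing of a triac does to the delivered voltage (idealized resistive load, no transformer leakage or core effects, and the angles are just examples):

```python
import math

def rms_fraction(firing_angle_deg: float) -> float:
    """RMS of a phase-angle-controlled sine, as a fraction of the
    full (uncut) RMS. The triac fires `firing_angle_deg` into each
    half-cycle; idealized resistive load assumed."""
    a = math.radians(firing_angle_deg)
    # V_rms^2 / V_rms,full^2 = (pi - a)/pi + sin(2a)/(2*pi)
    frac_sq = (math.pi - a) / math.pi + math.sin(2 * a) / (2 * math.pi)
    return math.sqrt(frac_sq)

for angle in (0, 45, 90, 135):
    print(f"fire at {angle:3d} deg -> {rms_fraction(angle):.2f} of full RMS")
```

The abrupt mid-cycle turn-on is a current step rich in harmonics, which is presumably the noise complained about above.

And on point 3, a crude two-rail class G model (rail voltages, commutation margin, and signal are all invented for the example, not taken from any Carver schematic) showing where the heatsink savings come from:

```python
import numpy as np

# Crude class G illustration: the output devices sit on the low
# rail until the signal needs more swing, then commutate to the
# high rail. Compare device dissipation against running the high
# rail all the time (plain class AB).

low_rail, high_rail = 25.0, 75.0   # volts, invented values
t = np.linspace(0, 1, 48_000)
vout = 70 * np.sin(2 * np.pi * 2 * t) * np.sin(2 * np.pi * 100 * t)

# Effective rail seen by the conducting device at each instant
# (2 V margin stands in for the diode commutation threshold):
rail = np.where(np.abs(vout) < low_rail - 2.0, low_rail, high_rail)

load = 8.0
p_classg = np.mean((rail - np.abs(vout)) * np.abs(vout) / load)
p_classab = np.mean((high_rail - np.abs(vout)) * np.abs(vout) / load)

print(f"class G device dissipation:  {p_classg:5.1f} W")
print(f"class AB (high rail only):   {p_classab:5.1f} W")
print(f"time spent on the low rail:  {np.mean(rail == low_rail):.0%}")
```

Real program material spends most of its time on the low rail, so average dissipation collapses even though peak output power doesn't.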
Today's pint-sized amps use this same technology. Mix that with a multi-stage class D output stage and your overall losses are reduced to a bare minimum. Really cool stuff. It can't quite beat classic amp topologies in sound quality yet. But you would be hard pressed to hear the difference between 0.01% and 0.1% distortion. The reduction in size is worth the trade-off to me in my daily driver.
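For scale on that last claim, the dB arithmetic (just the definition of THD in dB, nothing amp-specific):

```python
import math

for thd_pct in (0.1, 0.01):
    db = 20 * math.log10(thd_pct / 100)   # ratio relative to the fundamental
    print(f"{thd_pct}% THD = {db:.0f} dB below the fundamental")
# 0.1% -> -60 dB, 0.01% -> -80 dB; both levels are commonly
# cited as below audibility on real program material.
```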
Ge0