Co-processor Management with IBM Platform Co-Processor Harvesting
IBM® Platform™ Symphony Co-Processor Harvesting software extends IBM Platform Symphony, helping to drive new levels of business performance by distributing application workloads across Intel Xeon Phi co-processor resources according to policy and unique application requirements. The software enables enterprises to harness idle computing resources to build a scalable, high-performance operating environment designed to meet critical service levels and cost structures.
Need to do more with less?
With today’s economic pressures, organizations like yours are looking for better ways to improve IT performance—without the big capital expense involved in expanding the infrastructure. If you’re running Intel Xeon Phi co-processors in your Platform Symphony grid environment, the solution may already be in front of you. Chances are that the co-processors are not used at full capacity all the time, and those unused cycles can be tapped to boost application performance.
Make optimum use of what you have
Designed to use the Intel Xeon Phi co-processor as a resource, Platform Symphony Co-Processor Harvesting helps you reduce capital costs by making optimal use of available infrastructure. Using the power of the Xeon Phi co-processor together with a scaled-out application environment, you can achieve dramatic increases in application performance while minimizing costs by sharing co-processors among multiple users, applications and lines of business. Application lifecycle management costs are contained because you can support multiple versions of applications and co-processors on the same shared infrastructure.
Keep up with demands
Platform Symphony Co-Processor Harvesting empowers IT to keep pace with growing demand by scaling workloads across shared co-processor resources.
Scale for maximum computing output
With IBM Platform Symphony Co-Processor Harvesting, applications can scale across multiple hosts containing co-processors. Platform Symphony dispatches compute resources to support workloads without requiring one-to-one pairing between CPUs and co-processors. The software can reallocate more than 1,000 compute engines per second to different workloads, depending on user-defined sharing policies and application priorities. This capability helps reduce wait times and improve application performance and utilization, while delivering more on-demand compute power to the applications that require it.
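The sharing-policy behavior described above can be sketched conceptually. The following is a minimal toy model, not IBM Platform Symphony's actual API or scheduler: it divides a pool of compute engines among workloads in proportion to user-defined shares, then lends any unused entitlement to higher-share workloads with unmet demand. All names (`allocate`, the workload labels, the share values) are illustrative assumptions.

```python
# Toy model of policy-based resource sharing (illustrative only;
# not IBM Platform Symphony's actual API or scheduling algorithm).

def allocate(total_engines, demands, shares):
    """Split compute engines among workloads by share ratio,
    lending unused entitlement to workloads that still have demand."""
    total_shares = sum(shares.values())
    # Each workload's policy entitlement, proportional to its share.
    entitled = {w: int(total_engines * shares[w] / total_shares)
                for w in demands}
    # A workload never takes more than it actually demands.
    alloc = {w: min(demands[w], entitled[w]) for w in demands}
    spare = total_engines - sum(alloc.values())
    # Lend leftover engines to unmet demand, highest share first.
    for w in sorted(demands, key=lambda w: -shares[w]):
        extra = min(spare, demands[w] - alloc[w])
        alloc[w] += extra
        spare -= extra
    return alloc

# A high-priority "risk" workload borrows idle engines from "pricing":
print(allocate(1000,
               demands={"risk": 800, "pricing": 100},
               shares={"risk": 3, "pricing": 1}))
# → {'risk': 800, 'pricing': 100}
```

Here "risk" is entitled to 750 of the 1,000 engines by policy, but because "pricing" only needs 100, the scheduler in this sketch lends the idle capacity to "risk" rather than letting it sit unused, which is the harvesting idea in miniature.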