Frequently Asked (and Answered) Questions about the Frontier Grid Platform


Q: What is Frontier?

A: Parabon's Frontier® Grid Platform is a software solution that enables users to harness computational capacity on both dedicated and "idle" resources to run extreme-scale grid computing applications.

For more information, please visit our Frontier Grid Platform page.

Q: Does Frontier run over the Internet or within the Enterprise?

A: Both!

Parabon's Frontier software was developed to run as an online utility computing service: harnessing the idle capacity of thousands and thousands of computers all across the Internet and delivering that as a "pay-as-you-go" high-performance computing (HPC) service. We call this the Parabon Computation Grid.

In order to make such a solution viable, however, a lot of work has gone into creating a secure, reliable, and easy-to-use platform — one that can withstand the hostile environment of the public Internet!

A number of customers, however — whether for confidentiality or performance reasons — aren't interested in tapping into the online grid. Instead, they prefer to leverage the existing computational capacity of their own enterprise infrastructure, from notebooks and desktops to servers and mainframes on their internal networks.

For these customers, we offer the Frontier Enterprise grid server software, which enables organizations to run their own private Frontier grids, harnessing computing capacity within their existing networks and datacenters, without having to buy additional hardware.

Finally, we also offer a hybrid solution, the Virtual Private Grid (VPG). A Frontier VPG is managed by the public Parabon Computation Grid server, but executes tasks exclusively on private, in-house computing resources. This gives researchers unrestricted access to their own PCs; unlike the online Parabon Computation Grid, a VPG is billed at a fixed fee rather than by the cap-hour (Ch).

Q: How long have you been around?

A: Parabon was founded over a decade ago in 1999.

We released the Frontier Grid Platform software a year later, in 2000, making it the first commercial off-the-shelf (COTS) grid computing solution. Ten years later, it remains the most secure, reliable, and easy-to-use grid computing platform on the market.

Q: Who are your customers?

A: Frontier provides game-changing benefits to customers across a wide variety of market sectors: commercial enterprises, government agencies, academic institutions and non-profit organizations.

For more information, please visit our Who Uses Frontier page.


Q: What types of applications are well-suited to run on the grid?

A: Traditionally, grid computing has been used to tackle a wide variety of data-heavy, compute-intensive, and distributed command and control applications, such as modeling and simulation, data mining, predictive modeling, evolutionary optimization, and large-scale load and performance testing.

In addition, Frontier's Integrated VM Management allows the scheduler to programmatically provision entire operating system (OS) images across the grid, effectively turning it into a dynamic cloud computing platform. Because an OS image can encapsulate an entire environment, including third-party libraries and application dependencies, users can deploy fully functioning cloud services en masse across ordinary Windows, Linux, or Mac workstations.

For more information, please visit our What It's Used For page.

Q: I don't understand the payment model. What's the difference between "Flex" usage and "Reserved" time, etc.? And why "Capacity" - why not Gigaflops or Compute-Hours?

A: Developing a rate schedule is non-trivial. A number of factors determine how much WORK a job performs, and the POWER required to complete it in a given period of time.

To address these concerns, Parabon has developed a reasonably straightforward approach to measuring the amount of computation a job has used:

  • We start by calculating the relative "Capacity" of a given node.
    • This is done by running a benchmark (or suite of benchmarks) across the entire grid.
    • Faster computers complete the benchmark more quickly than slower ones, so we use the reciprocal of the time to determine the machine's Absolute Capacity.
    • We then calculate the average Absolute Capacity of all engines on the grid.
    • For each engine, we then divide its Absolute Capacity by that grid-wide average, which gives us that machine's Relative Capacity (or simply "C").
    • Therefore, 1C represents the capacity of an "average" node.
  • How much you pay is a function of the Capacity of all the nodes working on your job at any one time.
    • So if you were running at 1000C, that could mean one thousand "average" computers, five hundred "fast" machines, or two thousand "slow" systems. In any arbitrarily long time period, the amount of WORK performed will be the same.
    • The point is, regardless of the ACTUAL speed of the machines, you're charged a NORMALIZED rate, dependent on how much WORK was performed.
  • Arbitrary measurements like Gigaflops or Compute-Hours don't take into account real-world factors such as the variance of CPUs, or the patterns of availability resulting from actual people using their desktops.
  • Also, it's hard to know when users will want to launch "Flex" (on-demand) jobs, so we have to keep nodes on hand 24x7 to provide for unpredictable demand. To mitigate this, we encourage users to "Reserve" time on the grid; that way we can line up computation for pre-selected time frames.
    • Example: A user in New York wants to run a job every night from 2:00-4:00 AM EST. We can make a deal with a university in Madrid, Spain to run on their computers from 8:00-10:00 AM Central European Time.
    • Conversely, researchers in California who need access during the day might end up running on a batch of computers in Australia which aren't being used overnight.
    • Either way, reserving computation ahead of time allows us to approach sellers in the Capacity Market, and secure the necessary resources at the best negotiated price.
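The normalization steps above can be sketched in a few lines of Python. This is a hypothetical illustration, not Parabon's actual benchmarking or billing code; the node names and numbers are invented:

```python
# Hypothetical sketch of Frontier-style capacity normalization.
# benchmark_seconds: time each node took to complete the benchmark suite.
benchmark_seconds = {"node-a": 50.0, "node-b": 100.0, "node-c": 200.0}

# Absolute Capacity is the reciprocal of benchmark time: faster nodes score higher.
absolute = {node: 1.0 / t for node, t in benchmark_seconds.items()}

# Relative Capacity ("C") divides each node's score by the grid-wide average,
# so 1C is, by construction, an "average" node.
avg = sum(absolute.values()) / len(absolute)
relative = {node: a / avg for node, a in absolute.items()}

# A job's charge depends only on the total normalized capacity applied over
# time, not on which particular machines happened to run it.
def cap_hours(nodes, hours):
    return sum(relative[n] for n in nodes) * hours

print(relative)                              # node-a is fastest, so it rates above 1C
print(cap_hours(["node-a", "node-b"], 2.0))  # cap-hours consumed by a 2-hour run
```

Whether a job runs on a few fast nodes or many slow ones, the cap-hours consumed come out the same for the same amount of WORK.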

Q: I'm NOT a programmer. Can I still harness the power of Computation on Demand?

A: Absolutely!

One of the major advantages of the Frontier Grid Platform is its ability to provide Grid Software as a Service (GSaaS) applications; third-party developers are encouraged to offer their applications in our App Store. These products are applicable across a variety of problem domains — from financial forecasting, to chemical modeling and simulation, to sensor placement optimization, and more.


Q: Do I have to write my grid applications in a specific programming language? (Java, C++, etc.)

A: You are free to use whatever development tools you wish to develop your grid applications.

In the past, Frontier relied on the Java Virtual Machine's "sandbox" to protect providers' computers from potentially harmful tasks. Nowadays, thanks to the Integrated VM Manager introduced in Frontier 5, the Frontier Compute Engine can safely execute third-party executables in a virtual operating system, isolated from the rest of the computer.

By encapsulating the operating system environment, users can also ensure that all necessary dependencies are available. As a result, Frontier supports not only traditional Java and C++ executables, but also Fortran, Matlab, Perl, R, and a rich assortment of other programming languages.

Q: How hard is it to develop an application to run on the grid?

A: Like any API (Application Programming Interface), the Frontier Grid Platform requires learning a new set of conventions: understanding jobs and tasks, learning the appropriate method calls, and so on.

Having said that, however, we've developed the Frontier API to be as straightforward as possible. We provide ample documentation and example code, so getting up and running should be a snap. If you've ever struggled with deploying HPC applications across a cluster using other technologies, such as Condor or PVM, you should find our API very clean and concise.

In addition, Parabon regularly sponsors grid programming competitions at the college — and even high school — levels.

Q: I'm a programmer. How do I write my own applications to run on the grid?

A: It's easy... Simply Register For A Free Account, download the Software Development Kit (SDK) and follow the online tutorials.

Q: Can I run existing applications on Frontier?

A: Yes!

Our Frontier Rapids Integration and Execution Environment makes it easy to adapt your existing application to run on the Frontier Grid Platform.

Q: What's this I hear about SELLING my software on the grid?

A: Third-party developers can submit their applications for inclusion in our App Store. As users pay to use your software, we'll share those proceeds with you. It's our way of saying, "Thanks!" for helping us make Frontier the premier distributed computing platform, and encouraging the use of Computation on Demand®.

Q: What about inter-task communication?

A: By default, tasks run in isolation for security reasons: unless expressly allowed, the Frontier Compute Engine prevents grid tasks from accessing the host's resources, including the network, hard drive, and user programs.

In certain cases, however, users may request that their tasks be allowed to open TCP/IP connections to other sites — e.g., so-called "mashup" applications that download input data from a number of SOA resources. In those cases, developers can specify that their applications be permitted to access the network, and the Frontier Compute Engine will adjust the security policies accordingly.

Another solution, popular among evolutionary computing applications and employed by the Origin Evolutionary SDK, is for tasks to return interim results to the server, which then launches subsequent tasks based on data collected from previous tasks' output.
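Generically, that interim-results pattern is a loop like the toy sketch below. This is a hypothetical illustration only; the Origin Evolutionary SDK's actual API is different, and `run_task` here stands in for work that would really execute on the grid:

```python
# Hypothetical sketch of the interim-results pattern: each generation of
# tasks reports results to the server, which seeds the next generation.
import random

random.seed(0)  # reproducible toy run

def run_task(candidate):
    """Stand-in for a grid task: score a candidate solution (toy fitness)."""
    return sum(candidate)

def next_generation(scored, size):
    """Server-side step: seed new tasks from the best interim result."""
    best = max(scored, key=lambda pair: pair[1])[0]
    return [[g + random.uniform(-0.1, 0.1) for g in best] for _ in range(size)]

population = [[random.random() for _ in range(4)] for _ in range(8)]
for generation in range(5):
    scored = [(c, run_task(c)) for c in population]   # tasks run on the grid
    population = next_generation(scored, size=8)      # server launches new tasks

print(max(run_task(c) for c in population))
```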

Finally, some high-performance computing (HPC) frameworks such as MPI demand tightly-coupled node-to-node communication. Such applications can achieve this by registering each node's IP address with a directory service as the tasks are launched, then querying that directory to populate the necessary configuration files.

Because of the latency of the Internet, and the multiple layers of firewalls and NAT routing between computers, we don't recommend this approach online. Enterprise customers using hard-wired computers, or researchers who are scheduling to dedicated cluster hardware, on the other hand, are free to implement node-to-node communication in this manner.
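For tightly coupled workloads, the register-then-query pattern mentioned above can be sketched as follows. This is a hypothetical illustration: the directory service, names, and addresses are all invented, and a real deployment would use an actual network-reachable service rather than the in-memory stand-in here:

```python
# Hypothetical sketch: tasks register their address with a shared directory
# at launch, then query it to build an MPI-style hostfile.

class Directory:
    """In-memory stand-in for a network-reachable directory service."""
    def __init__(self):
        self._nodes = {}

    def register(self, task_id, ip, port):
        self._nodes[task_id] = (ip, port)

    def peers(self):
        return dict(self._nodes)

def write_hostfile(directory, path):
    """Render the registered peers as a simple host:port list."""
    lines = [f"{ip}:{port}" for ip, port in sorted(directory.peers().values())]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")
    return lines

# Each task registers itself as it launches...
d = Directory()
d.register("task-0", "10.0.0.4", 5000)
d.register("task-1", "10.0.0.7", 5000)

# ...then any task can query the directory to populate its configuration files.
print(write_hostfile(d, "hosts.txt"))
```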

Q: I have a lot of data to process. How do I handle it all?

A: The Frontier Compute Engine can pull data from distributed sources, such as a NAS or SOA stack, and can likewise upload results to remote storage devices.

Also, Parabon has demonstrated interoperability between the Frontier Grid Platform and the Hadoop Distributed File System, allowing grid tasks to access data stored in an HDFS.


Q: What are the advantages of becoming a provider?

A: The Parabon Computation Grid depends on people like yourself running the Frontier Compute Engine to power the Frontier Grid Platform.

Much of our work directly benefits the Compute Against Cancer philanthropic initiative, which provides resources for cancer researchers. From scientific analysts studying risk factors, to pharmaceutical companies developing a cure, there are numerous ways in which your computer can help fight cancer and other illnesses.

In addition, institutional providers — universities and businesses who provide 50C or more — are eligible for financial compensation, meaning we'll BUY your excess computation. Just as mutual fund managers package up individual stocks, we can bundle together pockets of computational capacity and sell it en masse to power-hungry users around the world. In return, you can maximize your Return on Investment (ROI) AND put your existing resources to good use.

For more information, please visit our Capacity Market page.


Q: Caps and cap-hours seem like overly complicated metrics. Why don't you simply buy and sell plain ol' CPU-hours like Sun Microsystems or large/medium/small virtual machine (VM) instance-hours like Amazon.com?

A: Recall Albert Einstein's dictum, "Everything should be made as simple as possible, but not simpler." Metrics like CPU-hours and VM instances are meaningful only when applied to the simple case of completely homogeneous resources. Because we aggregate capacity across a vast array of different computers and operating systems, we had to invent a means of measuring and normalizing it across heterogeneous resources.

For a more detailed explanation of how we calculate Capacity, see the discussion of "Flex" vs. "Reserved" Time above.

Q: Why is the Parabon Capacity Index recalibrated periodically?

A: The PCI serves different audiences:

  • Customers: a measure required to calculate the actual power applied and actual work performed for a customer
  • Providers: a rating for expressing the relative capacity of a provider's resources
  • Traders: an index around which contracts may be written and traded in both the wholesale and eventual futures markets for computation

The economics of computation is different from that of all other commodities in one important way: the quantity supplied and the quantity demanded are both increasing dramatically over time, to the tune of roughly 50% per year!

We could have defined and permanently fixed some unit of computational power, akin to the definition of a watt of electric power; in practice, however, such a unit would quickly become unwieldy and decreasingly useful for valuation purposes. For computation to be treated as a commodity, suppliers and providers need a practical means of quantifying and valuing it. Fundamentally, the price of computation is based on, among other things, the total cost of ownership of the computational resources that provide it.

A computer "of the capacity you might buy today" is something to which we can all relate; by contrast, the computational capacity and inflation-adjusted price of a computer that is, say, five years old is hard to compare to contemporary computers. Thus, if the unit measure of computational capacity (and power) were permanently fixed to the capacity of some designated "standard" (i.e., the capacity of a specifically designated computer with designated bandwidth), the measure would quickly become unwieldy (thanks to the Laws of Moore and Gilder) and, more significantly, ever less useful for comparison and valuation in the marketplace.

Q: Why do you use the Parabon Capacity Benchmark (PCB) instead of industry standard benchmarking products like those provided by the Standard Performance Evaluation Corporation (SPEC) or the Transaction Processing Performance Council (TPC)?

A: Unlike other benchmarks, the PCB assigns a capacity rating based not only on raw computational performance, but also on other factors that affect an engine's potential contribution to a computational grid, e.g., availability, reliability, bandwidth, and the memory allocated to its compute engine.

Q: Once determined, is the capacity rating of an engine constant?

A: No. First, the Basic Capability Index (BCI) of an individual engine is periodically recalibrated to account for its performance history, although over time the BCI for an engine does tend to equilibrate to a fixed value provided the influencing factors do not change dramatically. The Parabon Capacity Index of an engine, however, will decline over time. As ever-faster computers are added to the pool of available resources, the relative capacity of an engine naturally declines. As a corollary to Moore's Law, an engine with a PCI of 1.0 today is expected to have a PCI of 0.5 within 18-24 months.
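The expected decline can be made concrete with a small sketch. This is a hypothetical illustration of the corollary stated above, assuming relative capacity halves every 18-24 months; it is not an official Parabon formula:

```python
# Hypothetical sketch: expected PCI decay if the grid's average capacity
# doubles every `halving_months` months (a Moore's Law-style corollary).

def expected_pci(initial_pci, months_elapsed, halving_months=18):
    """An engine's relative capacity halves every `halving_months` months."""
    return initial_pci * 0.5 ** (months_elapsed / halving_months)

# An engine rated 1.0 C today...
print(expected_pci(1.0, 18))   # ...is expected to rate 0.5 C after one halving period
print(expected_pci(1.0, 36))   # ...and 0.25 C after two
```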

Q: Why don't you accept quotes from individual providers who can supply a few computers, but not 50C of capacity?

A: In time, we will. For now, it is only cost effective for us to purchase from large institutional providers (e.g., universities, businesses and municipalities) that can provide large blocks of capacity.

Learn More about Computation on Demand®

Want to know about anything else?

Contact us. We're eager to help.