Podcast: Tim Carroll Joins Cycle Computing, Discusses HPC Cloud Future

… long-time HPC community member Tim Carroll talks about the future of high performance computing in the cloud. Before joining Cycle Computing, Carroll was at Dell, where he was focused on the HPC market segment.

News Release: Amid Accelerating Growth, Cycle Computing Adds Industry Leaders Tim Carroll and Rob Futrick to Management Team

Celebrating a year of record growth, Cycle Computing has added HPC executive and startup veteran Tim Carroll as Vice President of Business Development and Ecosystem, and has promoted Rob Futrick to Chief Technical Officer (CTO). The buildout of the management team will leverage and accelerate rapid growth in revenue, new customers, and core hours under management.

Video: Cycle Computing Brings Greater Access to HPC Resources at ISC’14

In this video from ISC’14, Brad Rutledge from Cycle Computing describes how the company brings greater access to supercomputing resources. By making it easy to spin up large clusters, the company is breaking records of scale using Cloud HPC.

Cycle Computing seems to be on a roll at the moment, as the company is hiring in multiple U.S. locations. And in related news, Cycle Computing announced today that Novartis is using Cycle Computing to discover new cancer-fighting drugs.

News Release: Schrödinger Partners with Cycle Computing to Accelerate Materials Simulation using the Cloud

Schrödinger, LLC and Cycle Computing, LLC announced today a partnership that will allow customers to run Schrödinger’s Materials Science Suite on the Cloud and elastic resources worldwide using Cycle Computing’s CycleCloud orchestration software.

Scientific Computing: Big Compute: The Collision of where HPC is Meeting the Challenges of Big Data 

There is a shift underway in which researchers, engineers, and analysts can change the very way they think about problems. Previously, we were limited by the computing resources we had — the clusters we had on premises. Today, we can change the very way we ask our questions. Ask the right questions, and use the cloud to create the size of system needed to answer them.

HPCwire: Scaling the Super Cloud

“The number one problem we face as humanity is getting people to think outside of the boxes they bought,” says Cycle Computing CEO, Jason Stowe. His company has made big waves and proven that the combination of Amazon servers and their own innovations can open new infrastructure options for users with HPC applications. 

HPCwire: HPC Lessons for the Wider Enterprise World

Jason Stowe, CEO of HPC cloud company Cycle Computing, put it best when he told us, “We in HPC pay attention to the fastest systems in the world: the fastest CPUs, interconnects, and benchmarks. From petaflops to petabytes, we [in HPC] publish and analyze these numbers unlike any other industry…While we’ll continue to measure things like LINPACK, utilization, and queue wait times, we’re now looking at things like Dollars per Unit Science, and Dollar per Simulation, which ironically, are lessons that have been learned from enterprise.”

News Release: Cycle Computing Strengthens Management Team to Meet Growth Demands

Cycle Computing announced the addition of three executives, Gavan Corr as chief strategy officer, Robert Petrocchi as vice president of worldwide sales, and Brad Rutledge as vice president of marketing. This team will further help educate enterprises about the benefits and advantages of Utility HPC, grow the Cloud HPC ecosystem, and implement leading technology and processes to quickly and easily onboard Cycle Computing customers.

GigaOM: New startup economics: Why Amazon (web services) and Dropbox need each other

Amazon has been smart to focus on new markets, whether it is online storage providers such as Dropbox or HPC-in-the-cloud services such as Cycle Computing. They are the ones who are cloud-native and have created infrastructure demand that is many times that of the legacy companies.


The New York Times: IBM to announce more powerful Watson via the Internet

On Tuesday, a company appearing at the Amazon conference said it had run, in 18 hours, a project on Amazon’s cloud of computer servers that would have taken 264 years on a single server.

“It’s now $90 an hour to rent 10,000 computers,” the equivalent of a giant machine that would cost $4.4 million, said Jason Stowe, the chief executive of Cycle Computing, the company that did the Amazon supercomputing exercise, and whose clients include The Hartford, Novartis, and Johnson & Johnson. “Soon smart people will be renting a conference room to do some supercomputing.”
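A quick back-of-envelope check of the figures quoted above; the 264-year and 18-hour numbers come from the article, while the hours-per-year constant is our own assumption:

```python
# Rough check: a job that would take 264 years on one server
# finished in 18 hours of wall-clock time on the cloud.
HOURS_PER_YEAR = 24 * 365.25              # assumed: ~8,766 hours per year

serial_hours = 264 * HOURS_PER_YEAR       # single-server runtime, in hours
wall_clock_hours = 18                     # actual cloud runtime

# How many servers' worth of work ran in parallel, on average?
effective_servers = serial_hours / wall_clock_hours
print(f"equivalent to ~{effective_servers:,.0f} servers running in parallel")
```

The result, roughly 129,000 servers’ worth of parallel work, is consistent in order of magnitude with the 156,314-core cluster described in the coverage that follows.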


GigaOM: Cycle Computing once again showcases Amazon’s high

Cycle Computing, which divvies up workloads to run across AWS regions and zones, has been able to run and manage Schrödinger’s quantum chemistry software on a whopping 156,000 cores across 8 AWS regions.


The Register: Boffins look down back of Amazon Web Services, find a SUPERCOMPUTER

What runs faster than the majority of the world’s supercomputers, costs less, and was used to research organic solar-power cells? The answer is Megarun, a 1.21-petaflop super that was spun up by Cycle Computing in the AMAZON CLOUD.


Ars Technica: 18 hours, $33K, and 156,314 cores: Amazon cloud HPC hits a “petaflop”

For the past few years, HPC software company Cycle Computing has been helping researchers harness the power of Amazon Web Services when they need serious computing power for short bursts of time. The company has completed its biggest Amazon cloud run yet, creating a cluster that ran for 18 hours, hitting 156,314 cores at its largest point and a theoretical peak speed of 1.21 petaflops.


CNET: Supercomputing simulation employs 156,000 Amazon processor cores

Supercomputing, by definition, is never going to be cheap. But a company called Cycle Computing wants to make it more accessible by matching computing jobs with Amazon’s mammoth computing infrastructure.


Cycle Computing has a software management platform that controls the hundreds of thousands of virtual machines needed to run these types of jobs. Life science testing is a perfect fit for this software because of the massive number of options available to scientists to test a broad range of theories.


Podcast – SiliconANGLE, theCUBE: http://www.youtube.com/watch?v=maOLWbH1tyg

SiliconANGLE is a place where computer science meets social science.  Cycle Computing’s Jason Stowe was interviewed on a live video stream discussing the record-setting MegaRun.


Podcast – GigaOM: Dumb hardware, smart software’s the way to go, and geeking it up with Cycle Computing

In this week’s Structure Show, we talk through all the AWS Re:Invent news, the looming era of open-source switches, and how Cycle Computing helps a scientist build the solar panels of the future.

The cloud: High performance computing’s best hope?
OCTOBER 11, 2013 – ZDNet

At the recent ISC Cloud ’13 conference, Jason Stowe, CEO of Cycle Computing, presented an interesting assessment of the growing needs many companies have for on-demand high performance computing.

Cycle Computing believes that the easy availability of high performance computing — that is, the ability to address the largest and most complicated computing task by harnessing together the power of hundreds, or perhaps thousands, of computers — will improve the capabilities of many companies that previously were not able to use high performance computing.

Organizations that wish to use this approach need a large budget for hardware, software, power, networking and storage, as well as high levels of expertise on hand — unless they turn to offerings from cloud service providers.

Read full article »


Bidding strategies? Arbitrage? AWS spot market is where computing and finance meet.
OCTOBER 8, 2013 – GigaOM

Amazon last week launched a contest for companies to show their Spot Instance pricing strategies, with $5,000 in AWS credits going towards the best use cases and $3,000 in credits going to the runner-up. But the second year of the contest is as good a time as any to look at the often-mysterious beast that is AWS Spot Instances.

While not often used, they are an important element in Amazon’s bag of tricks as well as something that startups are using to save tens of thousands on certain workloads. I’ve spoken with several companies to understand the tips, tricks and strategies involved in playing the AWS spot market.

Read full article »


Flexibility — for HPC, clouds, and the workforce.
OCTOBER 2, 2013 – iSGTW

To build an HPC cluster in house, or to access third-party HPC resources through the cloud: that is the question. While it may not be quite as poetic as Hamlet, this is the conundrum with which many small-to-medium-sized enterprises and research institutes are faced. Organizations interested in conducting computationally expensive data analysis or carrying out complex simulations have to decide whether to build in-house HPC clusters, or take advantage of the availability of such clusters through cloud offerings. Both options have their relative pros and cons, but the message from last week’s ISC Cloud ’13 conference in Heidelberg, Germany, is that there is increasingly a very clear middle way.

Termed ‘utility HPC’ by keynote speaker Jason Stowe, CEO of Cycle Computing, this middle way involves organizations owning in-house HPC resources of sufficient performance to cover their typical usage, but also supplementing this on an ad hoc basis with additional cloud-based HPC resources for particularly computationally expensive projects — in other words, HPC clouds to cover the peaks.

Read full article »
Cloud HPC Firm Dares Scientists to Ask Big Questions
AUGUST 20, 2013 – HPCwire

Cloud-based supercomputing is, theoretically, a great idea, but the trend has not taken off as some in the HPC field believed it would. That isn’t stopping the folks at Cycle Computing, who say its Amazon-based supercomputers are not only helping scientists and researchers get real work done, but freeing their brains to ask the really big questions.

Scientific creativity is being hamstrung by the finite resources of traditional fixed-size supercomputing infrastructures, Cycle Computing CEO Jason Stowe said in a recent video. While all kinds of advances are being made in the HPC arena — particularly on the software side — all too often, scientists and researchers can’t adequately explore their ideas or ask the big questions due to a sheer lack of HPC capacity.

Read full article »


Still no end in sight for US banks’ parallel run
AUGUST 13, 2013 – Op Risk and Regulation

Since last month’s announcement from the US Federal Reserve on Basel III implementation, US institutions finally have certainty about their future capital requirements. Banks, the Fed said, will be forced to hold common equity Tier I capital equivalent to 7% of their risk-weighted assets, reflecting a minimum ratio of 4.5% and an additional capital conservation buffer of 2.5%. Banks will also face a higher leverage ratio of 4%, rather than the 3% minimum under Basel III, which the Fed will introduce on January 1, 2014.

So while US banks now have a much clearer roadmap to execute their Basel III implementation programmes, they remain on parallel run for Basel II compliance. This means that they are still waiting to have their advanced measurement approaches (AMA) to operational risk approved.

Read full article »


The Promise of Utility Supercomputing
AUGUST 9, 2013 – Wired

Imagine this: You’re a computational drug designer at a Big 10 Pharma studying the pathway for a cancer target. According to a GLOBOCAN/World Health Organization statistic from 2008, there are 12.6+ million new cases of cancer globally, and you’re in the trenches on this fight. Now, a cancer target is a protein that, much like a lock, has a pocket where molecules can fit, like keys, to either enhance or inhibit its function. The problem is, rather than the tens of keys on a normal keychain, you have tens of millions of molecules to check. Each one is computationally intensive to simulate, so in this case, you have approximately 340,000 hours of computation, or nearly 40 compute-years, ahead of you.

Now imagine you need to propose to your management that you run this workload, and to do so in a timely fashion you need about 10,600 servers of infrastructure. Chirag Dekate of IDC says this equates to a 14,400-square-foot data center that would take a year to get up and running, at a total cost of $44 million when you factor in space, cooling, power, cabling, and the process of hiring the people with the expertise to run it. It is safe to say this science would never happen.

Read full article »
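The arithmetic behind the Wired figures can be sanity-checked in a few lines; the 340,000-hour and 10,600-server numbers are from the article, and the hours-per-year constant is our own assumption:

```python
# Sanity check of the Wired drug-screening figures.
HOURS_PER_YEAR = 24 * 365.25          # assumed: ~8,766 hours per year

total_hours = 340_000                 # total computation required (from article)
servers = 10_600                      # servers proposed for a timely run (from article)

compute_years = total_hours / HOURS_PER_YEAR
hours_per_server = total_hours / servers

print(f"{compute_years:.1f} compute-years")          # ~38.8, i.e. "nearly 40"
print(f"{hours_per_server:.1f} hours per server")    # ~32 hours of wall-clock time
```

In other words, 10,600 servers compress nearly 40 compute-years into roughly a day and a half of wall-clock time, which is what makes the on-demand approach plausible at all.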
Cycle Computing and the HPC Experiment
JULY 15, 2013 – HPC in the Cloud

With hardware advancing at a relatively stable (if still exponential) rate and datasets increasing at a much higher rate, parallelism is a main tenet of high performance computing today. That parallelism is difficult to attain in a cloud environment, as latencies there are typically higher, thus slowing performance.

Three weeks ago, Jason Stowe, CEO of Cycle Computing, spoke with HPC in the Cloud about their work in renting large clusters of Amazon HPC instances for companies looking for a short but powerful burst of that parallelized computing power. The focus was on how they aided Schrödinger in winning a Bio-IT Best Practices award with their intensive yet relatively inexpensive protein calculations.

Read full article »


The Cloud’s the Limit: Rentable Supercomputers for Improving Drug Discovery
JULY 11, 2013 – Bio-IT World

Creating a computer program that accurately tells pharmaceutical companies which candidate drugs they should spend millions of dollars developing may seem like a daunting task, but Schrödinger, a software company that specializes in life science applications, hopes to do just that.

“Our mission is to advance computational drug design to the point of becoming a true enabling technology,” said Alessandro Monge, Schrödinger’s VP of Strategic Business.

Schrödinger won the Bio-IT World Best Practice Award for IT Infrastructure at the Bio-IT World Expo this past April for a drug discovery project they ran in collaboration with Cycle Computing that harnessed the power of cloud-based computing, a tool that allows companies to rent high performance computing hardware.

Read full article »


Big data spurring HPC, co-processor workloads
JUNE 17, 2013 – Virtualization Review

High performance computing systems are increasingly using co-processor systems, with Intel and Nvidia seen as a key tag team for big data workloads, according to IDC.

In a study detailing high performance computing (HPC) sites, IDC looked at 905 systems. In 2011, IDC profiled 488 HPC systems. The two-year jump largely highlights how 67 percent of HPC sites are now focused on big data workloads, said IDC.

The study from IDC corresponds with the latest TOP500 supercomputer ranking.

Read full article »
Cycle Computing CEO to Speak on Utility HPC at Cloud Slam and LiveStream Event
JUNE 12, 2013 – HPCwire

June 12 — Cycle Computing, the leader in utility high performance computing (HPC) software, announced today that CEO Jason Stowe will speak at Cloud Slam ’13. On Tuesday, June 18 at 10:45am-11:15am PST, Stowe will present in person at the conference on the benefits of accessible compute power and the implications for science. Stowe’s talk will include a number of HPC case studies in Life Sciences, in such areas as cancer drug research and stem cell indexing.

In addition to his presentation on accessible compute, Stowe will give an online talk on Tuesday, June 18 at 4:40pm-5:00pm PST, focused on large-scale HPC workloads on Intel Xeon processors in the cloud. Stowe will discuss case studies across verticals such as Life Sciences, Financial Services and Manufacturing. This presentation is sponsored by Intel Healthcare and will be available live online via LiveStream. Interested attendees can visit the CloudSlam LiveStream channel.

Read full article »


Cycle Computing to Speak on Utility HPC at AWS Summit Tokyo

What: Cycle Computing, the leader in utility high performance computing (HPC) software, and AWS Advanced Technology Partner, announced today that CEO Jason Stowe will speak at the AWS Summit Tokyo 2013. Stowe will be a featured guest speaker during Amazon CTO Werner Vogels’ opening keynote on Wednesday, June 5. During his talk, Stowe will discuss the success Cycle’s customers have had leveraging Cycle software and Amazon’s EC2 to run large scale, complex HPC workloads in such areas as Drug Discovery, Manufacturing and Genomics.

In addition to his talk during Vogels’ keynote, Stowe will give a session during the AWS Partner Briefing on Tuesday, June 4. Stowe will discuss how the ability to orchestrate Utility HPC and data access creates new opportunities for AWS Partners to grow their business in a variety of vertical markets.

The AWS Summit Tokyo features over 63 sessions focused on the cloud. Over two days, attendees will hear from over 20 companies with relevant use cases focused on the latest technology trends in cloud computing.

Both of Stowe’s talks will be translated to Japanese.

When: Wednesday June 5, 2013 – Thursday June 6, 2013

Where: Grand Prince Hotel New Takanawa, Tokyo

Who: Cycle Computing CEO Jason Stowe will discuss HPC cloud-based use cases and how using the cloud has made impossible science possible. Stowe will share his thoughts on the future of cloud computing and the democratization of compute power. In addition, Stowe will discuss how Cycle’s customers are leveraging its new data management product, DataManager, to schedule and manage the secure transfer and storage of data sets needed for large-scale computations.

To schedule a briefing with Jason Stowe at the event, contact Shaina Mardinly at 212-255-0080 ext. 15 or [email protected].

About Cycle Computing

Cycle Computing is the leader in Utility HPC software. As a self-funded, profitable software company, Cycle makes award-winning products that accelerate breakthroughs at any scale. From 50 to 50,000+ cores against up to 100s of TBs of data, the world’s brightest minds rely on Cycle software to tackle their most challenging computational problems in less time, for less cost than ever before possible. Cycle software provides the single pane of glass from which customers and partners easily orchestrate complex workloads and data across a right-sized set of internal and external HPC resources. Cycle helps clients maximize existing infrastructure and speed computations on servers, VMs, and on-demand in the cloud, like the 10,000-core cluster for Genentech, the 30,000+ core cluster for a Top 5 Pharma, and the 50,000-core cluster for Schrödinger covered in Wired, The Register, BusinessWeek, Bio-IT World, and Forbes. Since 2005, starting with three initial Fortune 100 clients, Cycle has grown to deploy proven implementations at Fortune 500s, SMBs and government and academic institutions including JP Morgan Chase, Purdue University, Pfizer and Lockheed Martin.

# # #

Media Contact: Shaina Mardinly, Articulate Communications Inc., [email protected], 212.255.0080, ext. 15


Schrödinger Named Bio-IT World Best Practices Grand Prize Winner

Big 5 Pharma Leverages Cycle Computing Software to Win Bio-IT World’s IT Infrastructure Grand Prize

New York – May 21, 2013 – Schrödinger, Inc., a scientific leader in chemical simulation for pharmaceutical and biotechnology research, was named the IT infrastructure grand prize winner of Bio-IT World’s best practices award for a 50,000-core utility supercomputer orchestrated by Cycle Computing, leader in Utility HPC software. Conducted in the Amazon Web Services (AWS) cloud, the environment was created to accelerate the screening of potential new cancer drugs.

Schrödinger’s researchers used Cycle’s HPC software to orchestrate the cloud computing resources needed to complete more than 4,480 days of work, nearly 12.5 years of computation, in less than three hours. The project cost less than $4,828 per hour at peak and required no upfront capital. Schrödinger had previously been conducting coarser screens due to the constraints of their internal infrastructure. In contrast, access to large scale yet cost effective computing made it possible to conduct much more granular screens on a significantly larger number of compounds. This approach identified many compounds that were good potential drug candidates and that would otherwise not have been discovered.

“We’re honored that our project was recognized by the Bio-IT World judges,” said Dr. Alessandro Monge, Schrödinger’s VP of strategic business. “With the level of sophisticated technology that Cycle provided us, we have significantly eliminated false negatives and false positives that delay drug discovery. The same calculation would’ve been cost prohibitive on our own infrastructure.”

“Our work with Schrödinger demonstrates how scientists can take advantage of innovative technology to complete better research faster and for exponentially less cost,” said Jason Stowe, founder and CEO, Cycle Computing. “We’re thrilled to create HPC environments to empower Schrödinger’s drug discovery breakthroughs and are honored by Bio-IT World’s recognition of their efforts.”

“We extend our sincere congratulations to the winners of this year’s Bio-IT World Best Practices Awards competition,” said Kevin Davies, editor of Bio-IT World. “Our select judges enjoyed evaluating the dozens of excellent entries received this year, and believe that the contest has highlighted some truly innovative, game-changing tools and solutions. Our winners should be very proud that they have captured the imagination and respect of such a distinguished jury.”

About Schrödinger

Schrödinger makes significant investments in R&D, which has led to major advances in the field of computational chemistry; it has achieved breakthroughs in quantum chemistry, molecular modeling, force fields, molecular dynamics, protein structure determination, scoring, and virtual screening. The company’s full product offerings range from general molecular modeling programs to a comprehensive suite of drug design software. Besides the company’s industry-leading drug discovery solutions, Schrödinger is actively developing state-of-the art simulation tools for materials research as well as enterprise software that can be deployed throughout an entire research organization. Schrödinger’s methods development and applications papers have thousands of citations and are often among the most-cited scientific publications. Schrödinger’s science is continually validated internally and by its users worldwide. Founded in 1990, Schrödinger has operations in the United States as well as in Europe, India, and Japan.

About Cycle Computing

Cycle Computing is the leader in Utility HPC software. As a self-funded, profitable software company, Cycle makes award-winning products that accelerate breakthroughs at any scale. From 50 to 50,000+ cores against up to 100s of TBs of data, the world’s brightest minds rely on Cycle software to tackle their most challenging computational problems in less time, for less cost than ever before possible. Cycle software provides the single pane of glass from which customers and partners easily orchestrate complex workloads and data across a right-sized set of internal and external HPC resources. Cycle helps clients maximize existing infrastructure and speed computations on servers, VMs, and on-demand in the cloud, like the 10,000-core cluster for Genentech, the 30,000+ core cluster for a Top 5 Pharma, and the 50,000-core cluster for Schrödinger covered in Wired, The Register, BusinessWeek, Bio-IT World, and Forbes. Since 2005, starting with three initial Fortune 100 clients, Cycle has grown to deploy proven implementations at Fortune 500s, SMBs and government and academic institutions including JP Morgan Chase, Purdue University, Pfizer and Lockheed Martin.

About Bio-IT World

Part of the Cambridge Healthtech Institute Media Group, Bio-IT World provides outstanding coverage of cutting-edge trends and technologies that impact the management and analysis of life sciences data, including next-generation sequencing, drug discovery, predictive and systems biology, informatics tools, clinical trials, and personalized medicine. Through a variety of sources including Bio-ITWorld.com, the Weekly Update Newsletter and the Bio-IT World News Bulletins, Bio-IT World is a leading source of news and opinion on technology and strategic innovation in the life sciences, including drug discovery and development.


# # #

‘No IT’ technologies empower business users, improve productivity
APRIL 25, 2013 – TechTarget

In the never-ending quest to cut corporate costs, the benefits of “no IT” technologies are becoming more sought-after than ever before.

Cloud computing and consumerization tools are empowering business users to be their own IT guys, so to speak, thereby improving workplace productivity through better content management and more efficient use of computing resources — to the benefit of both users and IT.

Software vendor SimplyBox has done that with its inContext apps (no relation to the name of this column). The apps, which SimplyBox calls “fragments,” are a kind of mashup for bridging applications that could work better together — for instance, LinkedIn, Salesforce.com, Twitter and Gmail. “This is not about integration,” said co-founder and CEO Mario Cavagnari. “It’s not about … rigid approaches, moving data, synchronizing data, mapping users and all of that. We don’t do any of that. What we do is allow people to get the information they need in the context they need without having to move data.”

A case in point: SimplyBox’s Salesforce inContext for LinkedIn software enables sales and marketing users to set up the bridge themselves and work between the two applications seamlessly — without requiring IT’s help.

Read full article »
IBM’s potential x86 server sale to Lenovo highlights oncoming train
APRIL 19, 2013 – ZDNet

IBM is reportedly in talks to sell its x86-based server business to Lenovo, and the move would make a lot of sense.

If the talks, flagged in the Wall Street Journal and CRN, sound familiar, that’s because Big Blue famously unloaded its PC business to Lenovo in a win-win deal. Lenovo went on to be one of the premier PC makers, and IBM focused on software and services and got ahead of trends such as analytics.

To say IBM’s PC situation then and today’s server state of affairs rhyme would be an understatement. You could argue the situations are the same thing. When IBM offloaded its PC unit, no one saw tablets coming. All IBM knew is that the margins stunk and it wanted higher value wares. The post-PC era was years away.

Read full article »
Amazon Now Storing 2 Trillion Objects in S3
APRIL 25, 2013 – Virtualization Review

In the latest sign that Amazon’s enterprise cloud business remains the envy of every other service provider, the number of objects stored in Amazon Web Services (AWS) Simple Storage Service, or S3, now stands at 2 trillion.

To put that in context, it’s double the amount of information stored in S3 since last June, when AWS hit the 1 trillion object milestone.

Amazon CTO Werner Vogels revealed the latest stat at the kickoff of his company’s first AWS Summit, a 13-city roadshow which commenced in New York last week. While Amazon doesn’t break out revenues for its AWS business, revenues categorized as “other” jumped 60 percent from $500 million to $798 million in the first quarter year-over-year, the company reported after the markets closed today. It’s widely presumed that the “other” revenues described by Amazon primarily come from AWS, underscoring the rapid growth of the business.

Read full article »
The Chef Feeding Facebook’s Infrastructure
MARCH 12, 2013 – Datacenter Dynamics

Jesse Robbins talks really fast. It is the pace of someone who is very excited. And when you first meet him, it doesn’t take long to realize why.

Robbins is a co-founder of Opscode, a company born out of a consultancy building fully automated infrastructure for startups. Robbins came from amazon.com, where he was responsible for website availability. Adam Jacobs, another founder, had been building new infrastructures for startups and had worked as a systems administrator and architect, and Barry Steinglass, the third party behind Opscode, was an early member of the Xbox platform team. The final founder, Nathan Haneysmith, used to be the Linux platform lead for IBM e-Business Web hosting.

The company they founded now has 400 paying customers, ranging from Brightcove and DreamHost to Splunk, who use the hosted and private offerings of Chef — Opscode’s infrastructure automation tool. It also has more than 800 “cookbooks,” shared recipes for code released by its open source community, whose users number in the thousands. And just as I talked to Robbins, Opscode had secured one of the biggest technology wins a tech company today could ask for: Facebook. Even more exciting, the social media giant is actually paying for Chef, which found its roots in open source, taking on services and cementing its value above the open source offering.

Read full article »


Cloud Computing: Where Are We Now?
MARCH 6, 2013 – Inc’s Productivity@Work Newsletter

As is often the case in technology development, the advantages cloud computing provides to larger organizations are beginning to trickle down to smaller ones, creating a world of new opportunities for small and medium-sized businesses. Cloud-computing solutions offer an inexpensive alternative for SMBs looking to save money (potentially, a very sizable amount) on their IT costs, and the new technology can help them level the playing field with larger competitors.

“IT begins and ends with increased efficiency and cost savings,” says Jim Darragh, CEO of Abiquo, a provider of advanced enterprise cloud software solutions. IT departments are working to find solutions to answer user demand, and that is especially true of SMBs, which typically have smaller budgets and must respond appropriately to both employee and market demand in order to survive. “So if SMBs can adapt to the cloud successfully, they are removing manual processes and installing automated, or at least very-easy-to-use, processes, and that’s a recipe for increased productivity and cost savings,” he says.

Read full article »


Cycle Computing Introduces DataManager™ to Lower Storage Costs and Ease Big Data Access for HPC Workloads

Utility HPC Software Provider Automates Secure, Large Scale Data Transfers to and from Amazon Glacier On-demand

New York – April 18, 2013 – Cycle Computing, the leader in utility HPC software, today announced the release of DataManager™, a new solution that schedules and manages the secure transfer and storage of data sets needed for large scale computations. DataManager seamlessly automates data archival and retrieval from lower cost cloud storage solutions, such as Amazon Glacier. This enables users to more quickly and cost-effectively conduct a wide range of compute- and data-intensive workloads in areas such as life sciences, financial services, manufacturing, academia and energy.

HPC workloads from molecular modeling to risk simulation require analysis of increasingly large volumes of data. Access to affordable compute has driven down the cost of generating this data, faster than the cost to store it. As a result, organizations want to reliably and securely leverage internal and cloud-based storage solutions to ensure their users have local access to the data needed to run these complex workloads.

“We introduced DataManager to meet the growing need for scientists to more efficiently manage large amounts of data to foster new discoveries in their fields,” said Jason Stowe, CEO of Cycle Computing. “Researchers now have the ability to control the transfer of large computation results to and from various internal and cloud-based storage systems. With DataManager, intelligent scheduling and automation can keep data in the right place at the right time for reference and future research.”

Cycle uses DataManager to automatically move data, using a variety of open source and third-party transfer protocols and appliances, to utility HPC environments like the recently announced utility HPC cluster for a Top 10 Pharma. This 10,600-server utility supercomputer was created in 2 hours and ran 39 years of computing on Amazon EC2 in 9 hours for $4,372, using one Opscode Chef server to automate configuration management. “DataManager does for automating data scheduling at scale what Opscode Chef does for configuration management in utility HPC clusters,” added Mr. Stowe.
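As a back-of-the-envelope sanity check on those figures (a sketch only, assuming a 365-day year and uniform utilization; the 10,600-server count is from the release, the per-server core count is derived, not stated):

```python
# Sanity-check the "39 years of computing in 9 hours on 10,600 servers" claim.
HOURS_PER_YEAR = 365 * 24            # 8,760 hours per year

core_hours = 39 * HOURS_PER_YEAR     # total compute delivered: 341,640 core-hours
concurrent_cores = core_hours / 9    # spread over 9 wall-clock hours
cores_per_server = concurrent_cores / 10_600

print(round(concurrent_cores))       # 37960 cores running simultaneously
print(round(cores_per_server, 1))    # 3.6 cores per server (implied, not stated in the release)
```

At $4,372 for roughly 341,640 core-hours, that works out to a little over a penny per core-hour.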

Key benefits of DataManager include:

• Data elasticity: Efficiently move data from endpoint to endpoint, such as a local file system to Glacier or a remote file system, based on the specific date and time it is needed

• Data awareness and usage chargeback: Usage statistics enable accurate accounting, chargeback, and user awareness of cost savings

• Client-side security and encryption: Locally control encryption and key management for data in transit and at rest in Amazon Glacier

• Lower latency and cost: Make cost-effective cloud archival more readily accessible to free up “hot” storage and avoid the need to buy additional on-premise devices
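The data-elasticity idea above, staging each data set so it arrives by the specific date and time it is needed, can be sketched as a simple deadline-driven scheduler. Everything below (the `TransferJob` type, the transfer-rate estimate, the safety margin) is a hypothetical illustration, not DataManager’s actual API:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class TransferJob:
    """A hypothetical archival/retrieval job between two endpoints."""
    dataset: str
    source: str            # e.g. "glacier://vault/archive-id"
    destination: str       # e.g. "file:///scratch/dataset"
    size_gb: float
    needed_by: datetime    # when the computation expects the data locally

def latest_start(job: TransferJob, rate_gb_per_hour: float = 50.0,
                 safety_margin: timedelta = timedelta(hours=1)) -> datetime:
    """Latest moment the transfer can begin and still meet its deadline.

    A real scheduler would also account for cold-storage retrieval lead
    time, which for Glacier-class archives can add hours on its own.
    """
    transfer_time = timedelta(hours=job.size_gb / rate_gb_per_hour)
    return job.needed_by - transfer_time - safety_margin

def schedule(jobs: list[TransferJob]) -> list[TransferJob]:
    """Order pending jobs by when each must start, earliest first."""
    return sorted(jobs, key=latest_start)
```

Working backward from the deadline rather than forward from submission is what lets cheap, slow archival storage sit behind a “hot” working set: the data only has to be local by the time the computation needs it.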

DataManager is now available for general use after being successfully tested and used in beta by two leading pharmaceutical companies and two of the top five life insurance organizations.

# # #

Utility Supercomputing Heats Up
FEBRUARY 28, 2013 | HPCwire

The HPC in the cloud space continues to evolve, and one of the companies leading that charge is Cycle Computing. The utility supercomputing vendor recently reported a record-breaking 2012, punctuated by several impressive big science endeavors. One of Cycle’s most significant projects was the creation of a 50,000-core utility supercomputer inside the Amazon Elastic Compute Cloud.

Built for pharmaceutical companies Schrödinger and Nimbus Discovery, the virtual mega-cluster was able to analyze 21 million drug compounds in just 3 hours for less than $4,900 per hour. The accomplishment caught the attention of IDC analysts Chirag Dekate and Steve Conway, who elected to honor Cycle with their firm’s HPC Innovation Excellence Award. Chirag Dekate, Research Manager of IDC’s High-Performance Systems, explained that the award recognizes those who have best applied HPC in the ecosystem to solve critical problems. More specifically, IDC is looking for scientific achievement, ROI, and a combination of these two elements.

Read full article »
Cycle Computing CTO James Cuff on Clouds, On-Demand Computing and Package Holidays
FEBRUARY 6, 2013 | Bio-IT World

The new Chief Technology Officer at Cycle Computing, James Cuff, spent the past seven years as Director of Research Computing and Chief Technology Architect for Harvard University’s Faculty of Arts and Sciences. His team worked “at the interface of science and advanced computing technologies,” providing a breadth of high-performance computing, storage and software expertise, all the while striving to manage a monstrous surge in data. Cuff previously led the construction of the Ensembl project at the Wellcome Trust Sanger Institute before moving to the U.S., where he managed production systems at the Broad Institute while his wife, fellow Brit Michelle Clamp, joined the lab of Broad director Eric Lander.

In his new position, Cuff aims to apply some of his insights and ideas to an even bigger canvas. Cycle has made headlines over the past 2-3 years by spinning up virtual supercomputers for academic and industry clients, as well as creating the Grand Science Challenge, donating more than $10,000 in cloud compute time. CEO Jason Stowe says Cuff brings a wealth of knowledge and contacts, and could bring some managerial discipline to Cycle’s patent portfolio. He adds that Cuff will remain in the Boston/Cambridge area, which could impact Cycle’s local presence down the road. (Meanwhile Clamp, who moved to Harvard from the BioTeam last year, will fill Cuff’s shoes on an interim basis while the search for his replacement continues.)

Cuff spoke to Bio-IT World editor Kevin Davies and shared his views about big data, cloud computing, and the future of research computing.

Read full article »
Cycle Computing Appoints New Chief Technology Officer
FEBRUARY 6, 2013 | PRWeb

Cycle Computing, the leader in utility supercomputing software, today announced it has appointed James Cuff as chief technology officer. Cuff will join Cycle’s senior management team to help customers achieve the full benefits of, and oversee product strategy for, Cycle’s growing portfolio of software and technology, as well as advance communication when new technology is introduced. As CTO, Cuff will further Cycle’s goal of making high performance computing (HPC) in the cloud accessible across industries by utilizing the organization’s award-winning HPC software.

“Cycle’s technology has evolved tremendously over the past several years to help make large amounts of compute accessible to scientists, engineers, manufacturers and anyone looking for fast and efficient compute power,” said Jason Stowe, CEO of Cycle Computing. “Adding someone of James’ caliber to our leadership team will allow us to bring our innovative HPC solutions to market and create an accessible supercomputing offering for everyone in need.”

Read full article »
Democratisation of Cloud Computing
JANUARY 2013 | Cloud Computing Intelligence

The most transformative technology trend of this century is increased access to computing. Whether it’s the sequencing of the first human genome, the finding earlier in 2012 of the “God Particle” that gives matter mass, or the correct prediction of the path of Hurricane Sandy, there are many examples of the impact that computing can have on humanity and our understanding of the world. Jason Stowe investigates.

Once an exclusive service reserved for the giants in the space, high performance computing (HPC) is now accessible to nearly everyone. The recent democratization of compute, coupled with the fundamental belief that all of humanity’s scientific, engineering and technical problems are solvable with enough access to compute power, has huge implications not only for this industry but for humanity in general. If compute power is what it takes to move the needle in such areas as energy, life sciences and risk management, and it is more readily available, the world will be in a much better place.

Read full article »
IDC Awards Cycle Computing HPC Innovation Excellence Award
JANUARY 16, 2013 | HPCwire

NEW YORK, Jan. 16 – Cycle Computing, the leader in utility supercomputing software, today announced it has ended its record-breaking 2012 by winning the IDC HPC Innovation Excellence Award. IDC recognized Cycle’s 50,000-core utility supercomputer run in the Amazon Web Services (AWS) cloud for pharmaceutical companies Schrödinger and Nimbus Discovery. The unprecedented cluster completed 12.5 processor-years of work in less than three hours at a cost of less than $4,900 per hour to facilitate computational drug discovery, and was recognized by IDC for its impressive return on investment.

The award capped a year of dramatic client growth and utility supercomputing accomplishments for Cycle, which recorded 85% growth in new clients, as compared to 80% in 2011. Building off its storied success in the life sciences sector throughout 2012, the company has increased its sales and support staff and has expanded across markets, including energy, manufacturing, academic & government research, and financial services.

Read full article »
High-Performance Computing Takes to the Cloud
DECEMBER 13, 2012 | The Wall Street Journal

Most companies that need this sort of performance obtain it from their own computers and data centers. But much of that capacity, which is designed for peak loads, goes unused a good percentage of the time, prompting a small but growing number of users to buy that computational power from cloud companies that deliver it over the Internet, as a service.

The global market for high performance computing is estimated at $10 billion for 2012, according to Addison Snell, CEO of Intersect360 Research. It’s still too early to estimate the market value of the cloud-based niche of the total market. But Snell says that 1% to 1.5% of all the work that is done on high-performance clusters is done via the cloud. And he estimates that 25% of all high-performance computer users say they have at least given the cloud a try for some of those operations.

The work can be billed by the hour, which can be a much more efficient use of money and resources for companies like Pacific Life Insurance, Novartis, The Hartford Financial Services Group and Genentech, all of which have used Amazon Web Services’ high-performance computing clusters for financial models and to crunch scientific data. They tap into the AWS cloud via a third-party vendor, Cycle Computing. The Hartford chose to work with Cycle because the company had a demand for larger simulations but also needed to reduce IT costs, said assistant vice president Robert Nordlund during a recent conference.

Read full article »
What’s Amazon’s enterprise strategy for the cloud?
DECEMBER 4, 2012 | Network World

Cycle Computing, led by CEO Jason Stowe, helps companies use AWS resources for high-performance computing needs. Cycle works with a majority of the top 20 pharmaceutical companies, he says, including Novartis, which ran a 30,000-core workload over 95,000 compute hours in AWS’s cloud to run analysis of drug tests. It’s a game-changer for the enterprise, he contends. “This is like moving from the horse and buggy to the automobile,” he says. As Cycle Computing ramps up, he says, large-scale workloads of 10,000 to 20,000 cores have become “pedestrian” for Cycle to manage and AWS to handle. Robert Half International, The Hartford and Pacific Life all discussed at Amazon’s conference ways in which they’re doing HPC in AWS’s cloud.

Read full article »
Why Amazon thinks big data was made for the cloud
NOVEMBER 30, 2012 | GigaOM

For Amazon Web Services Chief Data Scientist Matt Wood, the day isn’t spent performing data alchemy on behalf of his employer; he’s entertaining its customers. Wood helps AWS users build big data architectures that use the company’s cloud computing resources, and then takes what he learns about those users’ needs and turns it into products, such as the Data Pipeline Service and Redshift data warehouse AWS announced this week. He and I sat down this week at AWS’s inaugural Re: Invent conference and talked about many things, including what he’s seen in the field and where cloud-based big data efforts are headed. Here are the highlights.

Read full article »
Ahead by a Century: Utility Supercomputing Advances Stem Cell Research
OCTOBER 8, 2012 | HPC in the Cloud

The use of the term “computer” to mean “calculating machine” dates back to 1897, according to The Oxford English Dictionary, Second Edition. One hundred and fifteen years later, we’re on the verge of not only exascale calculating machines, but a new era in health care: personalized medicine. This emerging field, in which health care decisions and practices are customized to the individual patient using genetic information, rests on decades of scientific achievement. And just as advances in digital technology continue to bring HPC into the mainstream, advances in computer science and genomics are democratizing medical care.

One of the key enablers behind both of these trends is cloud computing, a way of delivering computing that relies on economies of scale. Making supercomputing accessible to a new class of user is the purview of utility supercomputing vendor Cycle Computing. In the weeks running up to SC11, Cycle CEO Jason Stowe introduced the Big Science Challenge to demonstrate the capabilities of on-demand supercomputing. If researchers could have access to virtually unlimited resources, Stowe asked, what kinds of big science questions could they answer?

Read full article »