Tagged: performance

  • fala13 17:59 on 02/06/2014
    Tags: compilation, gcc, performance

    Bad performance after switching to C++11?! 

    Recently the toolchain in my company (which is called Nokia now – awesome, right?) was updated from GCC 4.2 to GCC 4.7. Everybody in my project was very excited – our software will be faster thanks to move semantics and just from standing next to the new compiler! And developers will be able to write lambdas that no one else will be able to decode!

    When the first performance test results came in I was puzzled – an improvement was expected, and instead we got a 5-10% performance decrease? I had to ask my overly enthusiastic developers to hold off on enriching their code with the new shenanigans, and I put my detective hat on.
    I discovered that the new compiler itself is not to blame – merely adding -std=c++11 makes the difference – and shortly afterwards Stack Overflow came to the rescue: http://stackoverflow.com/questions/20977741/stdvector-performance-regression-when-enabling-c11
    Suffice it to say, GCC's default settings don't trigger proper inlining of container methods in the C++11 version of the STL. When we increased -finline-limit, performance got back to the level it had without the -std=c++11 flag. But not a bit higher.
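
    A minimal sketch of the kind of STL-heavy hot path that is sensitive to this (the function is made up for illustration, and the inline limit of 1000 is only an example value – measure against your own code base):

        // Hot loop whose speed depends on the compiler inlining
        // std::vector::size() and operator[].
        #include <cstddef>
        #include <vector>

        std::size_t sum(const std::vector<int>& v)
        {
            std::size_t total = 0;
            for (std::size_t i = 0; i != v.size(); ++i)
                total += v[i];
            return total;
        }

        // g++ -c -O2 sum.cpp                                 <- baseline
        // g++ -c -O2 -std=c++11 sum.cpp                      <- slower for us
        // g++ -c -O2 -std=c++11 -finline-limit=1000 sum.cpp  <- back to baseline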

    Background: Our project does a lot of message handling, uses the STL quite heavily and is around 1 million LOC. We get the best results with -O2 – yes, even better than with -O3. I have tested various flag combinations and nothing beats plain -O2. But that is for our application and our fast-path usage pattern. For yours you should always test, test and test for yourself (and investigate!).

  • fala13 21:50 on 27/03/2014
    Tags: cloud, performance

    War in the skies and some other rumors from the performance engineers world 

    I've just returned from the ICPE 2014 conference, which took place in Dublin. Apart from lots of cloud, VM and power-scaling talks, there were many that applied to what I do – building and maintaining SW performance models, creating and working with load tests, and lower-level stuff like niceness in Linux and CPU caches.

    Some fancy facts and rumors I’ve picked up:

    • Amazon estimates that 100 ms of latency on their pages costs them 1% in sales.
    • Google says a 0.5 s lag results in a 20% drop in their traffic.
    • Bank of America had 2 performance engineers earning $1 million a year. A new manager came in to cut costs and fired them. After a year, performance had dropped so badly that the bank had to buy additional servers for $15 million. Unfortunately, after another year even that was not enough to support the load, and further expenses (and the sacking of said manager) had to follow.
    • One bottleneck bug mentioned during the conference was a bad implementation of the visitor pattern (we have also had issues with those).
    • Lots of attention was given to optimal cache utilization (e.g. use a structure of arrays rather than an array of structures – see the sketch after this list).
    • Already in 2012, data centers used power equivalent to that of 30 nuclear power plants.
    • Dark shipping – a technique to test your new SW by transparently routing requests in parallel to your old and new SW. The customer only gets results from the old SW, but you observe how the new SW behaves and whether it produces the same output (a toy sketch follows after this list).
    • The new Intel Xeon Phi accelerator (60 cores ~= 1 TFLOP) is still slower than NVIDIA's Kepler CUDA GPUs due to its different cache memory structure – you really need to put a lot of effort into parallelizing your program (like manual vectorization) to utilize such accelerators.
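
    On the cache utilization point, here is a minimal sketch of the structure-of-arrays idea (the Particle layout and field names are made up for illustration): when a loop touches only one field, the SoA layout fills each cache line with useful data instead of dragging the unused fields along.

        #include <cstddef>
        #include <vector>

        struct ParticleAoS { float x, y, z, mass; };   // array of structures

        struct ParticlesSoA {                          // structure of arrays
            std::vector<float> x, y, z, mass;
        };

        float total_mass_aos(const std::vector<ParticleAoS>& p)
        {
            float m = 0.0f;
            for (std::size_t i = 0; i != p.size(); ++i)
                m += p[i].mass;        // also drags x, y, z into cache
            return m;
        }

        float total_mass_soa(const ParticlesSoA& p)
        {
            float m = 0.0f;
            for (std::size_t i = 0; i != p.mass.size(); ++i)
                m += p.mass[i];        // every cached byte is a mass value
            return m;
        }

    And a toy sketch of dark shipping (all names are made up; a real setup would run the shadow call asynchronously so it cannot slow down the customer-visible path):

        #include <iostream>
        #include <string>

        std::string handle_old(const std::string& req) { return "ok:" + req; }
        std::string handle_new(const std::string& req) { return "ok:" + req; }  // candidate

        std::string serve(const std::string& req)
        {
            const std::string answer = handle_old(req);   // customer-visible path
            const std::string shadow = handle_new(req);   // dark path, observed only
            if (shadow != answer)
                std::cerr << "dark-ship mismatch for: " << req << '\n';
            return answer;                                // always the old result
        }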

    All in all, it seems it pays off to be a cloud expert these days, as there is a kind of arms race and cloud war going on, with Google aggressively dropping its prices. During the conference there were a lot of talks on scaling the cloud and its power efficiency, so there is definitely something in the air ;).

    Some photos from my stay in Dublin:

     
  • fala13 10:44 on 03/11/2013
    Tags: agile, Neil Gunther, performance

    Obama's health website is a performance failure 

    One of the biggest US software projects has just gone bust before our eyes. The site's responsiveness was so poor that 'virtual waiting rooms' had to be implemented. It's fun to watch, especially if you have been struggling with similar quality, management and performance issues in your own career.

    Check out the summary of articles on the blog of Neil Gunther (a performance engineering guru): http://perfdynamics.blogspot.com/2013/10/what-happened-at-healthcaregov.html

    If you think this is TLDR (too long, didn't read), let me offer a short wrap-up (it should sound familiar if you have worked anywhere in the software business):

    • Requirements were supplied late and changed during implementation
    • Unrealistic deadlines
    • Agile methodologies were used by companies
    • After the initial failure, tons of people and consultants were thrown in to 'fix' it (including the infamous Booz Allen Hamilton – Edward Snowden's former employer)
    • The President had to publicly apologize for the broken web site

    If you are interested in how it looks from a software infrastructure perspective, take a look at this nice graphic summing it up: http://www.thedoctorweighsin.com/healthcare-dot-gov-how-does-it-work-infographic/

    And to finish, a quotation from the NYT: “Indeed, according to the research firm the Standish Group, 94 percent of large federal information technology projects over the past 10 years were unsuccessful — more than half were delayed, over budget, or didn’t meet user expectations, and 41.4 percent failed completely.”

     