[cctbxbb] Install OpenBLAS or MKL in base

Luc Bourhis luc_j_bourhis at mac.com
Wed Jun 20 05:13:21 PDT 2018


Years ago, I wrote on this list about a project of mine to dramatically accelerate scitbx.lstbx by using an optimised BLAS library. That work had been dormant on a branch, but we are now planning to move forward with the help of Pascal Parlois, who will do the actual coding in the coming months. In order for this new code to be exercised by the nightly tests, we, the smtbx people, need to arrange for an optimised BLAS library to be installed during bootstrap. Right now, as a stopgap, I use conda to install either OpenBLAS or MKL, but that won't do with the cctbx philosophy.

There are basically two choices: OpenBLAS or MKL. When I started this project, I chose OpenBLAS because MKL was not freely redistributable at the time, and I had got the green light to install OpenBLAS as needed. However, MKL has since become completely free, which makes it a possible choice as well. MKL is faster on Intel processors, but OpenBLAS is better on AMD (OpenBLAS actually runs on pretty much any processor out there, but that does not matter in the context of cctbx): cf. the Julia community's conclusions <https://discourse.julialang.org/t/openblas-is-faster-than-intel-mkl-on-amd-hardware-ryzen/8033>. I realise that most of you may not care one way or the other, but we won't move forward before we get the green light.
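To illustrate the kind of measurement behind such comparisons, here is a minimal Python sketch (not from the original post) that times a double-precision matrix multiply, which numpy dispatches to the dgemm of whichever BLAS it happens to be linked against, and reports an approximate GFLOP/s figure. The matrix size and repeat count are arbitrary choices for illustration.

```python
import time
import numpy as np

def dgemm_gflops(n=512, repeats=3):
    """Time C = A @ B (dispatched to the underlying BLAS dgemm for
    float64 arrays) and return an approximate GFLOP/s figure."""
    rng = np.random.default_rng(0)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal((n, n))
    best = float("inf")
    for _ in range(repeats):
        t0 = time.perf_counter()
        a @ b
        best = min(best, time.perf_counter() - t0)
    # A dense n x n matrix multiply costs about 2*n**3 floating-point ops.
    return 2.0 * n**3 / best / 1e9

print(f"~{dgemm_gflops():.1f} GFLOP/s")
```

Running this against an OpenBLAS-linked and an MKL-linked numpy on the same machine gives a rough per-processor comparison of the two libraries.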

Note that I am talking about installing a BLAS usable from C++ here, not about getting a BLAS-accelerated numpy. The latter could be an alternative, though it would require modifying the bootstrap code to compile an MKL- or OpenBLAS-enabled numpy anyway.
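For the numpy route, one can at least inspect which BLAS a given numpy build is linked against. A minimal sketch (the exact libraries reported by show_config vary between numpy versions and builds):

```python
import numpy as np

# Print the build-time configuration, which names the BLAS/LAPACK
# libraries (e.g. OpenBLAS or MKL) this numpy was linked against.
# The exact output format varies between numpy versions.
np.show_config()

# Sanity check that the linked BLAS gives correct dgemm results,
# comparing against a naive triple-loop reference.
a = np.arange(6.0).reshape(2, 3)
b = np.arange(12.0).reshape(3, 4)
c = a @ b  # dispatched to BLAS dgemm for float64 arrays
ref = np.array([[sum(a[i, k] * b[k, j] for k in range(3))
                 for j in range(4)] for i in range(2)])
assert np.allclose(c, ref)
```

A BLAS installed for C++ use during bootstrap would instead be linked directly into the scitbx extension modules, independently of how numpy was built.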

Best wishes,

Luc J Bourhis

