The standards committee adopted "P0811: Well-behaved interpolation for numbers and pointers" for C++20.
It includes a new library function, `std::midpoint`.
The paper says "The simple problem of computing a value between two other values is surprisingly subtle in general."
In this talk, I will explore this simple function, trace the history of its development in libc++, and show some of the pitfalls.
Undefined behavior will rear its ugly head, along with numeric representations and the arcane C integer promotion rules.
Along the way, we'll talk about testing, and why writing extensive tests helps everyone.
Fuzzing is an automated testing technique which repeatedly generates input data, feeds it into a program, and monitors the resulting behaviour. It is generally associated with improving the robustness and security of software, but it can be useful in other contexts.
Modules are coming to C++ in the next standard, and they will have an impact unlike any other new feature added to C++ since 1998. Modules are a first-class system for encapsulating and defining interfaces to translation units. They are a modern replacement for C++'s textual inclusion system (e.g. headers).
In the decade since C++11 shipped, the hardware landscape has changed drastically. 10 years ago, we were still in the early stages of the concurrent processing revolution; 2 to 4 hardware threads were common and more than 10 was "many". Our tolerance for synchronization latency was greater; we were willing to pay microseconds and milliseconds.
Many types in the standard library have additional metadata defined in them. For example, all container types define a value_type that tells us the type of the items stored in the collection. Iterators have tags that tell us which kind of iterator they are.
By now you have probably heard about “Regular Types and Why Do I Care” :)
"If you don't benchmark your code, you don't care about its performance." – Chandler Carruth
Benchmarking our code matters when we care about its speed. While using Google Benchmark to run a benchmark seems simple, there are many pitfalls, and getting accurate results isn't an easy task. What should we measure first? How comparable are the first and the thousandth iterations of our benchmark? What if we have to measure the speed of a single line of code? What are the limits of our measurements?
The talk will take an in-depth look at instruction optimization and explain how to achieve accurate measurements. It will be rather low-level, looking at assembly with the hardware in mind, but applied to modern C++, and will describe several tools and libraries that C++ developers can use to make various measurements on their code.
Value-oriented design reconciles functional and procedural programming by focusing on value semantics. Like functional programming, it promotes local reasoning and composition; it is, however, pragmatic and can be implemented in idiomatic C++. In previous talks, I have discussed how immutable data structures help us use value semantics at scale, and how the Unidirectional Data-flow Architecture (Redux, Elm) provides a solid foundation for designing interactive software based on values and functions.
The ability to create costless abstractions has been one of the most important principles of C++ since its inception. Ideally, such abstractions allow for better complexity management without the runtime overhead often associated with it.
Simple yet very effective coding habits can make significant differences in productivity and quality during development. Since static code analyzers are becoming increasingly clever, they are particularly well placed to point out certain classes of such habits, think performance, maintainability, bug prevention, and features supported by a given standard (e.g. move semantics). Their remarks, often considered "low-hanging fruit", are broadly applicable, supported by the community, and rarely dismissed as premature optimization, making them perfect candidates to store in muscle memory.
There are already plenty of programming languages compiled to and executed on virtual machines. C++ is a systems programming language, focused on control, speed, and efficiency. Therefore, compiling C++ to bytecode and executing it on a virtual machine seems counter-intuitive. However, in the context of web technologies, this is a very powerful concept that opens new opportunities for the language.
Good documentation helps readers understand a project easily. Besides textual descriptions and pure API listings, diagrams are a great way to provide overviews and to visualize different abstraction levels. However, it is challenging to keep such visualizations up to date with the code.
In this talk, I will present some approaches for generating high-level visualizations that simplify the understanding of C++ projects.
Examples show how to create such diagrams and improve their consistency by integrating them closely with your source code revision system.
We all like to complain about the lack of some nifty features in C++ or the absence of "obvious" library functions. At the same time we lament the overwhelming complexity of the language. Torn between those two forces we often forget that the evolution of C++ is inherently bound to its core and decisions that were made decades ago.
Drawing an image or displaying text on the screen is accomplished by a process called rendering. Typically a text based application renders using the CPU while 3D games make heavy use of GPU rendering. It would be ideal if applications were able to render graphics and text using hardware acceleration on the GPU. Is this practical, feasible, or realistic?
Modern CPU microarchitectures are extremely complex and their behaviour can sometimes cause performance degradation that cannot be explained by looking at the source code alone. In certain cases we have to look at the microarchitecture details under the hood to understand what's really going on. This presentation will attempt to explain and demonstrate several effects caused by design trade-offs in today's CPUs and also show some tools that can be used to detect and measure them.