nhaliday + oly-programming (34)

From Java 8 to Java 11 - Quick Guide - Codete blog
jshell, Optional methods, var (type inference), {List,Set,Map}.copyOf, `java $PROGRAM.java` execution
programming  cheatsheet  reference  comparison  jvm  pls  oly-programming  gotchas  summary  flux-stasis  marginal  org:com 
16 days ago by nhaliday
c++ - Which is faster: Stack allocation or Heap allocation - Stack Overflow
On my machine, using g++ 3.4.4 on Windows, I get "0 clock ticks" for both stack and heap allocation for anything less than 100000 allocations, and even then I get "0 clock ticks" for stack allocation and "15 clock ticks" for heap allocation. When I measure 10,000,000 allocations, stack allocation takes 31 clock ticks and heap allocation takes 1562 clock ticks.

so maybe around a 50-100x difference (31 vs 1562 clock ticks ≈ 50x in the 10M-allocation run)? what does that work out to in terms of total workload?

Recent work shows that dynamic memory allocation consumes nearly 7% of all cycles in Google datacenters.

That's not too bad actually. Seems like I shouldn't worry about shifting from heap to stack/globals unless profiling says it's important, particularly for non-oly stuff.

edit: Actually, a ~50-100x factor behind that 7% is pretty high: the 7% is a datacenter-wide average, so code that allocates much more heavily than average could see its constant factor inflated by close to an order of magnitude.
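
A minimal sketch of this kind of microbenchmark (my own illustration, not the answerer's code); on a modern compiler the absolute numbers will look nothing like the g++ 3.4.4 figures above, and the optimizer may elide unused allocations entirely, hence the volatile sink:

```cpp
// Illustrative stack-vs-heap microbenchmark (hypothetical, not the quoted test).
// Results are written through a volatile sink to keep the work alive.
#include <chrono>
#include <cstdio>

int main() {
    constexpr int N = 10'000'000;
    volatile int sink = 0;

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i) {
        int x = i;             // "stack allocation": just a stack slot/register
        sink = x;
    }
    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < N; ++i) {
        int* p = new int(i);   // heap allocation: allocator call each iteration
        sink = *p;
        delete p;
    }
    auto t2 = std::chrono::steady_clock::now();

    using ms = std::chrono::duration<double, std::milli>;
    std::printf("stack: %6.1f ms\nheap:  %6.1f ms\n",
                ms(t1 - t0).count(), ms(t2 - t1).count());
}
```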
q-n-a  stackex  programming  c(pp)  systems  memory-management  performance  intricacy  comparison  benchmarks  data  objektbuch  empirical  google  papers  nibble  time  measure  pro-rata  distribution  multi  pdf  oly-programming  computer-memory 
21 days ago by nhaliday
LeetCode - The World's Leading Online Programming Learning Platform
very much targeted toward interview prep
This data is especially valuable because you get to know a company's interview style beforehand. For example, most questions that appeared in Facebook interviews have short solutions, typically not more than 30 lines of code. Their interview process focuses on your ability to write clean, concise code. On the other hand, Google-style interviews lean more on the analytical side and are algorithm-heavy, typically with multiple solutions to a question, each with a different runtime complexity.
programming  tech  career  working-stiff  recruiting  interview-prep  algorithms  problem-solving  oly-programming  multi  q-n-a  qra  comparison  stylized-facts  facebook  google  cost-benefit  homo-hetero 
24 days ago by nhaliday
What every computer scientist should know about floating-point arithmetic
Floating-point arithmetic is considered an esoteric subject by many people. This is rather surprising, because floating-point is ubiquitous in computer systems: Almost every language has a floating-point datatype; computers from PCs to supercomputers have floating-point accelerators; most compilers will be called upon to compile floating-point algorithms from time to time; and virtually every operating system must respond to floating-point exceptions such as overflow. This paper presents a tutorial on the aspects of floating-point that have a direct impact on designers of computer systems. It begins with background on floating-point representation and rounding error, continues with a discussion of the IEEE floating point standard, and concludes with examples of how computer system builders can better support floating point.

"you must use an epsilon when dealing with floats" is a knee-jerk reaction of programmers with a superficial understanding of floating-point computations, for comparisons in general (not only to zero).

This is usually unhelpful because it doesn't tell you how to minimize the propagation of rounding errors, it doesn't tell you how to avoid cancellation or absorption problems, and even when your problem is indeed related to the comparison of two floats, it doesn't tell you what value of epsilon is right for what you are doing.


Regarding the propagation of rounding errors, there exist specialized analyzers that can help you estimate it, because it is a tedious thing to do by hand.
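
As an illustration of why a single fixed epsilon can't be right at every scale, here is a sketch (the relative-plus-absolute-floor recipe and its constants are one common convention, not something prescribed by the answer):

```cpp
// Sketch: a single absolute epsilon cannot be right at every magnitude.
#include <cmath>
#include <cstdio>
#include <limits>

bool nearly_equal(double a, double b,
                  double rel = 8 * std::numeric_limits<double>::epsilon(),
                  double abs_floor = 1e-12) {
    double diff = std::fabs(a - b);
    // absolute floor handles comparisons near zero; relative term handles scale
    return diff <= abs_floor || diff <= rel * std::fmax(std::fabs(a), std::fabs(b));
}

int main() {
    // near 1e10, adjacent doubles are ~2e-6 apart, so any absolute epsilon
    // smaller than that can only ever match exact equality there
    double big = 1e10;
    std::printf("spacing near 1e10: %g\n", std::nextafter(big, 2 * big) - big);

    // cancellation: both operands are accurate to ~1e-16 relative error,
    // but their difference retains only about one significant digit
    std::printf("(1 + 1e-15) - 1 = %g (exact: 1e-15)\n", (1.0 + 1e-15) - 1.0);

    std::printf("nearly_equal(0.1 + 0.2, 0.3) = %d\n",
                (int)nearly_equal(0.1 + 0.2, 0.3));
}
```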


This was part of HW1 of CS24:
In particular, simply summing n numbers in sequence has a worst-case error that grows proportional to n, and a root mean square error that grows as √n for random inputs (the roundoff errors form a random walk).[2] With compensated summation, the worst-case error bound is independent of n, so a large number of values can be summed with an error that only depends on the floating-point precision.[2]
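
The compensated summation referred to here is Kahan's algorithm; a textbook sketch:

```cpp
// Kahan (compensated) summation: the compensation c recovers the low-order
// bits lost when adding each term to the large running sum, making the
// worst-case error bound independent of n.
// Caveat: -ffast-math/-Ofast lets the compiler simplify (t - sum) - y to 0,
// silently turning this back into naive summation.
#include <vector>

double kahan_sum(const std::vector<double>& xs) {
    double sum = 0.0;
    double c = 0.0;               // running compensation for lost bits
    for (double x : xs) {
        double y = x - c;         // apply correction from previous step
        double t = sum + y;       // low-order bits of y may be lost here
        c = (t - sum) - y;        // algebraically 0; numerically the lost bits
        sum = t;
    }
    return sum;
}
```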

In numerical analysis, pairwise summation, also called cascade summation, is a technique to sum a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence.[1] Although there are other techniques such as Kahan summation that typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost—it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide and conquer algorithm. Its worst-case roundoff errors grow asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below).[1] In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn).[1] Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations.[1] If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(ε√(log n)) for pairwise summation.[2]
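
A minimal sketch of the recursion just described; the base-case block size (64 below, an arbitrary tuning choice) is what lets practical implementations match the cost of naive summation:

```cpp
// Pairwise (cascade) summation: split, sum halves recursively, add.
// Error grows only as O(eps log n) instead of O(eps n) for the naive loop.
#include <cstddef>

double pairwise_sum(const double* x, std::size_t n) {
    if (n <= 64) {                 // naive base case keeps recursion cheap
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i) s += x[i];
        return s;
    }
    std::size_t half = n / 2;
    return pairwise_sum(x, half) + pairwise_sum(x + half, n - half);
}
```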

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow roundoff accumulation of those FFTs.[2][3]

However, these encouraging error-growth rates only apply if the trigonometric “twiddle” factors in the FFT algorithm are computed very accurately. Many FFT implementations, including FFTW and common manufacturer-optimized libraries, therefore use precomputed tables of twiddle factors calculated by means of standard library functions (which compute trigonometric constants to roughly machine precision). The other common method to compute twiddle factors is to use a trigonometric recurrence formula—this saves memory (and cache), but almost all recurrences have errors that grow as O(√n), O(n), or even O(n²), which lead to corresponding errors in the FFT.


There are, in fact, trigonometric recurrences with the same logarithmic error growth as the FFT, but these seem more difficult to implement efficiently; they require that a table of Θ(log n) values be stored and updated as the recurrence progresses. Instead, in order to gain at least some of the benefits of a trigonometric recurrence (reduced memory pressure at the expense of more arithmetic), FFTW includes several ways to compute a much smaller twiddle table, from which the desired entries can be computed accurately on the fly using a bounded number (usually < 3) of complex multiplications. For example, instead of a twiddle table with n entries ω_n^k, FFTW can use two tables with Θ(√n) entries each, so that ω_n^k is computed by multiplying an entry in one table (indexed with the low-order bits of k) by an entry in the other table (indexed with the high-order bits of k).
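
A sketch of that two-table factorization (my reconstruction of the idea, not FFTW's code): write k = q·m + r with m ≈ √n, so ω_n^k = ω_n^(qm) · ω_n^r, and each twiddle factor costs one lookup per half of k plus one complex multiply:

```cpp
// Reconstruction of the two-table twiddle scheme described above: a
// Theta(sqrt(n)) "high" table of w_n^(q*m) and a Theta(sqrt(n)) "low"
// table of w_n^r together reproduce any of the n twiddle factors.
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

struct TwiddleTable {
    std::size_t m;                                 // ~ sqrt(n)
    std::vector<std::complex<double>> low, high;   // low[r] = w^r, high[q] = w^(q*m)

    explicit TwiddleTable(std::size_t n)
        : m(static_cast<std::size_t>(std::ceil(std::sqrt(double(n))))) {
        const double pi = std::acos(-1.0);
        const double w = -2.0 * pi / double(n);    // angle step for w_n = e^(-2*pi*i/n)
        for (std::size_t r = 0; r < m; ++r)
            low.push_back(std::polar(1.0, w * double(r)));
        for (std::size_t q = 0; q * m < n; ++q)
            high.push_back(std::polar(1.0, w * double(q * m)));
    }

    // w_n^k via one lookup in each table and one complex multiplication
    std::complex<double> operator()(std::size_t k) const {
        return high[k / m] * low[k % m];
    }
};
```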

[ed.: Nicholas Higham's "Accuracy and Stability of Numerical Algorithms" seems like a good reference for this kind of analysis.]
nibble  pdf  papers  programming  systems  numerics  nitty-gritty  intricacy  approximation  accuracy  types  sci-comp  multi  q-n-a  stackex  hmm  oly-programming  accretion  formal-methods  yak-shaving  wiki  reference  algorithms  yoga  ground-up  divide-and-conquer  fourier  books  tidbits  chart  caltech  nostalgia 
7 weeks ago by nhaliday
c - What REALLY happens when you don't free after malloc? - Stack Overflow
keep this in mind when writing competition code: you can usually just omit deletes/frees unless you're really running up against the memory limit:
Just about every modern operating system will recover all the allocated memory space after a program exits.


On the other hand, the similar admonition to close your files on exit has a much more concrete result - if you don't, the data you wrote to them might not get flushed, or if they're a temp file, they might not get deleted when you're done. Also, database handles should have their transactions committed and then closed when you're done with them. Similarly, if you're using an object-oriented language like C++ or Objective-C, not freeing an object when you're done with it will mean the destructor will never get called, and any resources the class is responsible for might not get cleaned up.


I really consider this answer wrong. One should always deallocate resources after one is done with them, be it file handles, memory, or mutexes. By having that habit, one will not make that sort of mistake when building servers. Some servers are expected to run 24x7. In those cases, any leak of any sort means that your server will eventually run out of that resource and hang/crash in some way. For a short utility program, yeah, a leak isn't that bad. For any server, any leak is death. Do yourself a favor. Clean up after yourself. It's a good habit.


Allocation Myth 4: Non-garbage-collected programs should always deallocate all memory they allocate.

The Truth: Omitted deallocations in frequently executed code cause growing leaks. They are rarely acceptable. But programs that retain most allocated memory until program exit often perform better without any intervening deallocation. malloc is much easier to implement if there is no free.

In most cases, deallocating memory just before program exit is pointless. The OS will reclaim it anyway. Free will touch and page in the dead objects; the OS won't.

Consequence: Be careful with "leak detectors" that count allocations. Some "leaks" are good!
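
For competition code this can be taken to its logical end with a bump allocator: replace global operator new with a pointer bump over a static arena and make delete a no-op (a common oly trick; the 256 MiB size below is an arbitrary example, size it to the problem's memory limit):

```cpp
// Bump allocator over a static arena: operator new just advances a cursor,
// delete is a no-op, and the OS reclaims the whole arena at exit.
// No bounds check, no thread safety: competition code only.
#include <cstddef>
#include <new>

alignas(std::max_align_t) static char arena[256 << 20];  // 256 MiB, example size
static std::size_t used = 0;

void* operator new(std::size_t n) {
    constexpr std::size_t a = alignof(std::max_align_t);
    used = (used + a - 1) / a * a;   // round cursor up to a safe alignment
    void* p = arena + used;
    used += n;
    return p;
}
void operator delete(void*) noexcept {}               // intentionally leaks
void operator delete(void*, std::size_t) noexcept {}  // sized variant
```

Dropped at the top of a solution file this also feeds the STL's default allocator (which calls ::operator new); resetting used to 0 acts as a crude free-everything.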
q-n-a  stackex  programming  memory-management  performance  systems  c(pp)  oly-programming  computer-memory 
april 2019 by nhaliday
Olimpiada Informatyczna
old link seems broken?: http://main.edu.pl/en/archive/oi

Polish Informatics Olympiad, known for very difficult problems
database  problem-solving  puzzles  eastern-europe  oly  usaco-ioi  oly-programming  unit  multi  quixotic  nostalgia  cost-benefit  advanced 
april 2016 by nhaliday
