nhaliday + 🖥   180

[Tutorial] A way to Practice Competitive Programming : From Rating 1000 to 2400+ - Codeforces
this guy really didn't take that long to reach red..., as of today he's done 20 contests in 2y to my 44 contests in 7y (w/ a long break)...>_>

tho he has 3 times as many submissions as me. maybe he does a lot of virtual rounds?

some snippets from the PDF guide linked:
1400-1900:
To be rating 1900, the following skills are needed:
- You know and can use major algorithms like these: brute force, DP, DFS, BFS, Dijkstra, Binary Indexed Tree, nCr/nPr, mod inverse, bitmasks, binary search
- You can code fast (for example, 5 minutes for R1100 problems, 10 minutes for R1400 problems)
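Of the algorithms listed, nCr with a mod inverse trips up many people around this rating. A minimal sketch, assuming a prime modulus (10^9+7 is the usual contest choice, but that constant is my assumption, not the guide's):

```python
# nCr modulo a prime, using Fermat's little theorem:
# a^(p-2) ≡ a^(-1) (mod p) when p is prime and a is not divisible by p.
MOD = 10**9 + 7  # assumed contest-standard prime; any prime works

def mod_inverse(a: int, p: int = MOD) -> int:
    """Multiplicative inverse of a modulo prime p."""
    return pow(a, p - 2, p)

def ncr(n: int, r: int, p: int = MOD) -> int:
    """n choose r modulo prime p (looping over r terms)."""
    if r < 0 or r > n:
        return 0
    num = den = 1
    for i in range(r):
        num = num * (n - i) % p
        den = den * (i + 1) % p
    return num * mod_inverse(den, p) % p
```

For many queries, precomputing factorials and inverse factorials once is the usual speedup over this per-call loop.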

If you are not good at fast-coding and fast-debugging, you should solve AtCoder problems. Actually, and statistically, many Japanese coders are relatively good at fast-coding while not so good at solving difficult problems. I think that's because of AtCoder.

I recommend solving problems C and D in AtCoder Beginner Contest. On average, if you can solve problem C of AtCoder Beginner Contest within 10 minutes and problem D within 20 minutes, you are Div1 in FastCodingForces :)

...

Interestingly, typical problems are concentrated in Div2-only rounds. If you are not good at Div2-only rounds, it is likely that you are not good at using typical algorithms, especially the 10 algorithms listed above.

If you can use the typical algorithms but are not good at solving problems above R1500 in Codeforces, you should begin TopCoder. This type of practice is effective for people who are good at Div.2-only rounds but not good at Div.1+Div.2 combined or Div.1+Div.2 separated rounds.

Sometimes, especially in Div1+Div2 rounds, some problems need mathematical concepts or thinking. Since TopCoder has a lot of problems which use them (and which are light on implementation!), you should solve TopCoder problems.

I recommend solving the Div1Easy of the most recent 100 SRMs. But some problems are really difficult (e.g. even red-ranked coders could not solve them), so before you start, you should check what percentage of people solved the problem. You can use https://competitiveprogramming.info/ to find this information.

1900-2200:
To be rating 2200, the following skills are needed:
- You know and can use the 10 algorithms stated on p. 11, plus segment trees (including lazy propagation)
- You can solve problems very fast: for example, 5 mins for R1100, 10 mins for R1500, 15 mins for R1800, 40 mins for R2000
- You have decent skills in mathematical thinking and in analyzing problems
- You have a strong mentality: you can think about a solution for more than an hour, and you don't give up even if you are below average in Div1 in the middle of a contest
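The segment tree with lazy propagation mentioned in that list can be sketched as a range-add / range-sum tree. This is a minimal recursive version for illustration, not tuned for contest speed:

```python
class LazySegTree:
    """Range add and range sum over a fixed-size array, both O(log n)."""

    def __init__(self, n: int):
        self.n = n
        self.sum = [0] * (4 * n)   # node sums
        self.lazy = [0] * (4 * n)  # pending per-element additions

    def _push(self, node, lo, hi):
        # Move this node's pending addition down to its children.
        if self.lazy[node]:
            mid = (lo + hi) // 2
            for child, l, h in ((2 * node, lo, mid), (2 * node + 1, mid + 1, hi)):
                self.lazy[child] += self.lazy[node]
                self.sum[child] += self.lazy[node] * (h - l + 1)
            self.lazy[node] = 0

    def add(self, ql, qr, val, node=1, lo=0, hi=None):
        """Add val to every element in [ql, qr]."""
        if hi is None:
            hi = self.n - 1
        if qr < lo or hi < ql:
            return
        if ql <= lo and hi <= qr:
            self.lazy[node] += val
            self.sum[node] += val * (hi - lo + 1)
            return
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        self.add(ql, qr, val, 2 * node, lo, mid)
        self.add(ql, qr, val, 2 * node + 1, mid + 1, hi)
        self.sum[node] = self.sum[2 * node] + self.sum[2 * node + 1]

    def query(self, ql, qr, node=1, lo=0, hi=None):
        """Sum of elements in [ql, qr]."""
        if hi is None:
            hi = self.n - 1
        if qr < lo or hi < ql:
            return 0
        if ql <= lo and hi <= qr:
            return self.sum[node]
        self._push(node, lo, hi)
        mid = (lo + hi) // 2
        return (self.query(ql, qr, 2 * node, lo, mid)
                + self.query(ql, qr, 2 * node + 1, mid + 1, hi))
```

Contest libraries usually generalize this over a monoid and a lazy action; the range-add/range-sum pair above is the simplest instance.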

This is only my way to practice, but I did many virtual contests when I was rated 2000. Here, a virtual contest does not mean "Virtual Participation" on Codeforces. It means choosing 4 or 5 problems whose difficulty is near your rating (for example, if you are rated 2000, choose R2000 problems on Codeforces) and solving them within 2 hours. You can use https://vjudge.net/. On this website, you can make virtual contests from problems on many online judges. (e.g. AtCoder, Codeforces, Hackerrank, Codechef, POJ, ...)

If you cannot solve a problem within the virtual contest and cannot find the solution afterwards, you should read the editorial. Google it. (e.g. if you want the editorial of Codeforces Round #556 (Div. 1), search "Codeforces Round #556 editorial" on Google.)

There is one more important thing for gaining rating in Codeforces. To solve problems fast, you should equip a coding library (or template code). For example, I think that equipping segment tree libraries, lazy segment tree libraries, a modint library, an FFT library, a geometry library, etc. is very effective.
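As an example of what such a template item might look like, here is a tiny modint sketch (the class name, prime choice, and method set are my illustration, not taken from the guide):

```python
class ModInt:
    """Integer arithmetic modulo a fixed prime, so contest code can
    write ordinary +, -, *, / without sprinkling % everywhere."""
    MOD = 10**9 + 7  # assumed prime modulus

    def __init__(self, v: int):
        self.v = v % self.MOD

    def __add__(self, o): return ModInt(self.v + int(o))
    def __sub__(self, o): return ModInt(self.v - int(o))
    def __mul__(self, o): return ModInt(self.v * int(o))

    def __truediv__(self, o):
        # Division via Fermat's little theorem (inverse of o mod a prime).
        return self * pow(int(o), self.MOD - 2, self.MOD)

    def __int__(self): return self.v
    def __eq__(self, o): return self.v == int(o)
```

The point of carrying such a class in your template is exactly what the note says: during a contest you spend keystrokes on the problem, not on modular bookkeeping.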

2200 to 2400:
Rating 2200 and rating 2400 are actually very different ...

To be rating 2400, the following skills are needed:
- You have the skills stated in the previous section (rating 2200)
- You can solve difficult problems which are solved by fewer than 100 people in Div1 contests

...

At first, there are a lot of educational problems in AtCoder. I recommend solving problems E and F (especially the 700-900 point problems) of AtCoder Regular Contest, especially ARC058-ARC090. Old AtCoder Regular Contests are balanced between "considering" and "typical" problems, but sadly, AtCoder Grand Contest and recent AtCoder Regular Contest problems are too biased toward considering, I think, so I don't recommend them if your goal is to gain rating in Codeforces. (Though if you want a rating above 2600, you should solve problems from AtCoder Grand Contest.)

For me, actually, after solving AtCoder Regular Contests, my average performance in CF virtual contests increased from 2100 to 2300 (I could not reach 2400 because the start was early)

If you cannot solve a problem, I recommend giving up and reading the editorial on the following schedule:

Point value:        600     700     800     900     1000-
CF rating:          R2000   R2200   R2400   R2600   R2800
Time to editorial:  40 min  50 min  60 min  70 min  80 min

If you solve AtCoder educational problems, your competitive programming skills will improve. But there is one more problem: without practical skills, your rating won't increase. So, you should do 50+ virtual participations (especially Div.1) in Codeforces. In virtual participation, you can learn how to compete as a purple/orange-ranked coder (e.g. strategy) and how to use the skills you learned in AtCoder in Codeforces contests. I strongly recommend reading the editorials of all problems, except the most difficult ones (e.g. those solved by fewer than 30 people in contest), after the virtual contest. I also recommend writing reflections about strategy, lessons, and improvements in a notebook after reading the editorials.

In addition, about once a week, I recommend making time to think about a much more difficult problem (e.g. R2800 in Codeforces) for a couple of hours. If you cannot reach the solution after thinking for a couple of hours, I recommend reading the editorial, because you can learn a lot. Solving high-level problems may give you the chance to gain over 100 rating points in a single contest, and it can also help you solve easier problems faster.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  hmm  pdf  guide  reflection  advice  wire-guided  marginal  stylized-facts  speed  time  cost-benefit  tools  multi  sleuthin  review  comparison  puzzles  contest  aggregator  recommendations  objektbuch  time-use  growth  studying  🖥  🌳  yoga
august 2019 by nhaliday
The 'science' of training in competitive programming - Codeforces
"Hard problems" is subjective. A good rule of thumb for learning problem solving (at least according to me) is that your problem selection is good if you fail to solve roughly 50% of problems you attempt. Anything in [20%,80%] should still be fine, although many people have problems staying motivated if they fail too often. Read solutions for problems you fail to solve.

(There is some actual math behind this. Hopefully one day I'll have the time to write it down.)
- misof in a comment
--
I don't believe in any of the claims like "either you solve it in 30 mins – a few hours, or you never solve it at all". There are some magic-at-first-glance algorithms like polynomial hashing, interval trees or FFT (which is magic even at tenth glance :P), but there are not many of them, and the vast majority of algorithms can be invented on our own, for example dp. In high school I used to solve many problems from the IMO and PMO, and when I didn't solve a problem I tried it again after some time. And I have solved some problems on the third or so attempt. Though, if we are restricting ourselves to beginners, I think it still holds true, but it would be better to read solutions after some time, because there are so many other things we can learn; better not to get stuck on one particular problem when there are hundreds of other important concepts to be learnt.
oly  oly-programming  problem-solving  learning  practice  accretion  strategy  marginal  wire-guided  stylized-facts  hmm  advice  tactics  time  time-use  cost-benefit  growth  studying  🖥  🌳
august 2019 by nhaliday
LeetCode - The World's Leading Online Programming Learning Platform
very much targeted toward interview prep
This data is especially valuable because you get to know a company's interview style beforehand. For example, most questions that appeared in Facebook interviews have short solutions, typically not more than 30 lines of code. Their interview process focuses on your ability to write clean, concise code. On the other hand, Google-style interviews lean more on the analytical side and are algorithm-heavy, typically with multiple solutions to a question, each with a different run-time complexity.
programming  tech  career  working-stiff  recruiting  interview-prep  algorithms  problem-solving  oly-programming  multi  q-n-a  qra  comparison  stylized-facts  facebook  google  cost-benefit  homo-hetero  startups  organization  alien-character  🖥  contest  puzzles  accretion  transitions  money-for-time
june 2019 by nhaliday
I've written my program but should it take days to get to the answer?
Absolutely not! Each problem has been designed according to a "one-minute rule", which means that although it may take several hours to design a successful algorithm with more difficult problems, an efficient implementation will allow a solution to be obtained on a modestly powered computer in less than one minute.
math  rec-math  math.NT  math.CO  programming  oly  database  community  forum  stream  problem-solving  accretion  puzzles  contest  🖥  🌳
june 2019 by nhaliday
Which benchmark programs are faster? | Computer Language Benchmarks Game
old:
https://salsa.debian.org/benchmarksgame-team/archive-alioth-benchmarksgame
https://web.archive.org/web/20170331153459/http://benchmarksgame.alioth.debian.org/
includes Scala

very outdated but more languages: https://web.archive.org/web/20110401183159/http://shootout.alioth.debian.org:80/

OCaml seems to offer the best tradeoff of performance vs parsimony (Haskell not so much :/)
https://blog.chewxy.com/2019/02/20/go-is-average/
http://blog.gmarceau.qc.ca/2009/05/speed-size-and-dependability-of.html
old official: https://web.archive.org/web/20130731195711/http://benchmarksgame.alioth.debian.org/u64q/code-used-time-used-shapes.php
https://web.archive.org/web/20121125103010/http://shootout.alioth.debian.org/u64q/code-used-time-used-shapes.php

other PL benchmarks:
https://github.com/kostya/benchmarks
BF 2.0:
Kotlin, C++ (GCC), Rust < Nim, D (GDC,LDC), Go, MLton < Crystal, Go (GCC), C# (.NET Core), Scala, Java, OCaml < D (DMD) < C# Mono < Javascript V8 < F# Mono, Javascript Node, Haskell (MArray) << LuaJIT << Python PyPy < Haskell < Racket <<< Python << Python3
mandel.b:
C++ (GCC) << Crystal < Rust, D (GDC), Go (GCC) < Nim, D (LDC) << C# (.NET Core) < MLton << Kotlin << OCaml << Scala, Java << D (DMD) << Go << C# Mono << Javascript Node << Haskell (MArray) << LuaJIT < Python PyPy << F# Mono <<< Racket
https://github.com/famzah/langs-performance
C++, Rust, Java w/ custom non-stdlib code < Python PyPy < C# .Net Core < Javascript Node < Go, unoptimized C++ (no -O2) << PHP << Java << Python3 << Python
comparison  pls  programming  performance  benchmarks  list  top-n  ranking  systems  time  multi  🖥  cost-benefit  tradeoffs  data  analysis  plots  visualization  measure  intricacy  parsimony  ocaml-sml  golang  rust  jvm  javascript  c(pp)  functional  haskell  backup  scala  realness  generalization  accuracy  techtariat  crosstab  database  repo  objektbuch  static-dynamic  gnu  mobile
december 2018 by nhaliday
design patterns - What is MVC, really? - Software Engineering Stack Exchange
The model manages fundamental behaviors and data of the application. It can respond to requests for information, respond to instructions to change the state of its information, and even to notify observers in event-driven systems when information changes. This could be a database, or any number of data structures or storage systems. In short, it is the data and data-management of the application.

The view effectively provides the user interface element of the application. It'll render data from the model into a form that is suitable for the user interface.

The controller receives user input and makes calls to model objects and the view to perform appropriate actions.

...

Though this answer has 21 upvotes, I find the sentence "This could be a database, or any number of data structures or storage systems. (tl;dr : it's the data and data-management of the application)" horrible. The model is the pure business/domain logic. And this can and should be so much more than data management of an application. I also differentiate between domain logic and application logic. A controller should not ever contain business/domain logic or talk to a database directly.
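A toy sketch of the separation described above (all names are illustrative, not from any particular framework): the model holds data and notifies observers, the view only renders, and the controller turns input into model calls.

```python
class Model:
    """Holds the data and notifies observers when it changes."""
    def __init__(self):
        self._items = []
        self._observers = []

    def subscribe(self, callback):
        self._observers.append(callback)

    def add_item(self, item):
        self._items.append(item)
        for notify in self._observers:
            notify(self._items)


class View:
    """Renders model data; knows nothing about storage or input handling."""
    def __init__(self):
        self.rendered = ""

    def render(self, items):
        self.rendered = ", ".join(items)


class Controller:
    """Translates user input into model calls; wires model to view."""
    def __init__(self, model, view):
        self.model = model
        model.subscribe(view.render)

    def handle_input(self, text):
        self.model.add_item(text.strip())
```

Note how this sketch sides with the answer's critics too: the model here could carry domain logic (validation, rules) rather than being a bare data bag, and the controller never touches storage directly.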
q-n-a  stackex  explanation  concept  conceptual-vocab  structure  composition-decomposition  programming  engineering  best-practices  pragmatic  jargon  thinking  metabuch  working-stiff  tech  🖥  checklists  code-organizing  abstraction
october 2017 by nhaliday
Superintelligence Risk Project Update II
https://www.jefftk.com/p/superintelligence-risk-project-update

https://www.jefftk.com/p/conversation-with-michael-littman
For example, I asked him what he thought of the idea that we could get AGI with current techniques, primarily deep neural nets and reinforcement learning, without learning anything new about how intelligence works or how to implement it ("Prosaic AGI" [1]). He didn't think this was possible, and believes there are deep conceptual issues we still need to get a handle on. He's also less impressed with deep learning than he was before he started working in it: in his experience it's a much more brittle technology than he had been expecting. Specifically, when trying to replicate results, he's often found that they depend on a bunch of parameters being in just the right range, and without that the systems don't perform nearly as well.

The bottom line, to him, was that since we are still many breakthroughs away from getting to AGI, we can't productively work on reducing superintelligence risk now.

He told me that he worries that the AI risk community is not solving real problems: they're making deductions and inferences that are self-consistent but not being tested or verified in the world. Since we can't tell if that's progress, it probably isn't. I asked if he was referring to MIRI's work here, and he said their work was an example of the kind of approach he's skeptical about, though he wasn't trying to single them out. [2]

https://www.jefftk.com/p/conversation-with-an-ai-researcher
Earlier this week I had a conversation with an AI researcher [1] at one of the main industry labs as part of my project of assessing superintelligence risk. Here's what I got from them:

They see progress in ML as almost entirely constrained by hardware and data, to the point that if today's hardware and data had existed in the mid 1950s researchers would have gotten to approximately our current state within ten to twenty years. They gave the example of backprop: we saw how to train multi-layer neural nets decades before we had the computing power to actually train these nets to do useful things.

Similarly, people talk about AlphaGo as a big jump, where Go went from being "ten years away" to "done" within a couple years, but they said it wasn't like that. If Go work had stayed in academia, with academia-level budgets and resources, it probably would have taken nearly that long. What changed was a company seeing promising results, realizing what could be done, and putting way more engineers and hardware on the project than anyone had previously done. AlphaGo couldn't have happened earlier because the hardware wasn't there yet, and was only able to be brought forward by massive application of resources.

https://www.jefftk.com/p/superintelligence-risk-project-conclusion
Summary: I'm not convinced that AI risk should be highly prioritized, but I'm also not convinced that it shouldn't. Highly qualified researchers in a position to have a good sense of the field have massively different views on core questions like how capable ML systems are now, how capable they will be soon, and how we can influence their development. I do think these questions are possible to get a better handle on, but I think this would require much deeper ML knowledge than I have.
ratty  core-rats  ai  risk  ai-control  prediction  expert  machine-learning  deep-learning  speedometer  links  research  research-program  frontier  multi  interview  deepgoog  games  hardware  performance  roots  impetus  chart  big-picture  state-of-art  reinforcement  futurism  🤖  🖥  expert-experience  singularity  miri-cfar  empirical  evidence-based  speculation  volo-avolo  clever-rats  acmtariat  robust  ideas  crux  atoms  detail-architecture  software  gradient-descent
july 2017 by nhaliday
Lessons from a year's worth of hiring data | Aline Lerner's Blog
- typos and grammatical errors matter more than anything else
[I feel like this is probably broadly applicable to other application processes, in the sense that it's more important than you might guess]
- having attended a top computer science school doesn't matter
- listing side projects on your resume isn't as advantageous as expected
- GPA doesn't seem to matter
career  tech  sv  data  analysis  objektbuch  jobs  🖥  tactics  empirical  recruiting  working-stiff  transitions  progression  interview-prep
december 2016 by nhaliday
I don't understand Python's Asyncio | Armin Ronacher's Thoughts and Writings
Man that thing is complex and it keeps getting more complex. I do not have the mental capacity to casually work with asyncio. It requires constantly updating the knowledge with all language changes and it has tremendously complicated the language. It's impressive that an ecosystem is evolving around it but I can't help but get the impression that it will take quite a few more years for it to become a particularly enjoyable and stable development experience.

What landed in 3.5 (the actual new coroutine objects) is great. In particular with the changes that will come up there is a sensible base that I wish would have been in earlier versions. The entire mess with overloading generators to be coroutines was a mistake in my mind. With regards to what's in asyncio I'm not sure of anything. It's an incredibly complex thing and super messy internally. It's hard to comprehend how it works in all details. When you can pass a generator, when it has to be a real coroutine, what futures are, what tasks are, how the loop works, and that doesn't even get to the actual IO part.

The worst part is that asyncio is not even particularly fast. David Beazley's live-demo hacked-up asyncio replacement is twice as fast as it. There is an enormous amount of complexity that's hard to understand and reason about, and then it fails on its main promise. I'm not sure what to think about it, but I know at least that I don't understand asyncio enough to feel confident about giving people advice about how to structure code for it.
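For reference, the coroutine objects he praises from 3.5 look like this (a minimal example using the modern `asyncio.run` entry point; `fetch` and its sleep delays are stand-ins, nothing asyncio-internal):

```python
import asyncio

async def fetch(name: str, delay: float) -> str:
    # A real coroutine would await actual IO here; sleep stands in for it.
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list:
    # Run both coroutines concurrently on the event loop;
    # gather preserves argument order in its result list.
    return await asyncio.gather(fetch("a", 0.01), fetch("b", 0.02))

results = asyncio.run(main())
```

The async/await surface is the "sensible base" the post refers to; the complexity complaints are about the machinery underneath (futures, tasks, the loop itself).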
python  libraries  review  concurrency  programming  pls  rant  🖥  techtariat  intricacy  design  confusion  performance  critique
october 2016 by nhaliday
Before the Startup
You can, however, trust your instincts about people. And in fact one of the most common mistakes young founders make is not to do that enough. They get involved with people who seem impressive, but about whom they feel some misgivings personally. Later when things blow up they say "I knew there was something off about him, but I ignored it because he seemed so impressive."

If you're thinking about getting involved with someone (as a cofounder, an employee, an investor, or an acquirer) and you have misgivings about them, trust your gut. If someone seems slippery, or bogus, or a jerk, don't ignore it.

This is one case where it pays to be self-indulgent. Work with people you genuinely like, and you've known long enough to be sure.
advice  startups  business  paulg  yc  essay  🖥  instinct  long-term  techtariat  barons  entrepreneurialism
october 2016 by nhaliday
daemonology.net
Colin Percival's homepage
interesting links to Hacker News highlights
people  security  engineering  programming  links  blog  stream  hn  hacker  🖥  techtariat  cracker-prog
october 2016 by nhaliday
HN: the good parts

And yet, I haven't found a public internet forum with better technical commentary. On topics I'm familiar with, while it's rare that a thread will have even a single comment that's well-informed, when those comments appear, they usually float to the top. On other forums, well-informed comments are either non-existent or get buried by reasonable sounding but totally wrong comments when they appear, and they appear even more rarely than on HN.

...

I compiled a very abbreviated list of comments I like because comments seem to get lost. If you write a blog post, people will refer it years later, but comments mostly disappear. I think that's sad -- there's a lot of great material on HN (and yes, even more not-so-great material).
hn  forum  subculture  list  contrarianism  community  dan-luu  top-n  🖥  techtariat
october 2016 by nhaliday
What You Can't Say
E Pur Si Muove:
http://blog.samaltman.com/e-pur-si-muove
https://archive.is/yE75n

Sam Altman and the fear of political correctness: http://marginalrevolution.com/marginalrevolution/2017/12/sam-altman-fear-political-correctness.html
Earlier this year, I noticed something in China that really surprised me. I realized I felt more comfortable discussing controversial ideas in Beijing than in San Francisco. I didn't feel completely comfortable (this was China, after all), just more comfortable than at home.

That showed me just how bad things have become, and how much things have changed since I first got started here in 2005.

It seems easier to accidentally speak heresies in San Francisco every year. Debating a controversial idea, even if you 95% agree with the consensus side, seems ill-advised.
--
And so it runs with shadow prices for speech, including rights to say things and to ask questions. Whatever you are free to say in America, you have said many times already, and the marginal value of exercising that freedom yet again doesn't seem so high. But you show up in China, and wow, your pent-up urges are not forbidden topics any more. Just do be careful with your mentions of Uncle Xi, Taiwan, Tibet, Uighur terrorists, and disappearing generals. That said, in downtown Berkeley you can speculate rather freely on whether China will someday end up as a Christian nation, and hardly anybody will be offended.

For this reason, where we live typically seems especially unfree when it comes to speech. And when I am in China, I usually have so, so many new dishes I want to sample, including chestnuts and pumpkin.

https://medium.com/@jasoncrawford/what-people-think-you-cant-say-in-silicon-valley-a6d04f632a00

Baidu's Robin Li is Helping China Win the 21st Century: http://time.com/5107485/baidus-robin-li-helping-china-win-21st-century/
Therein lies the contradiction at the heart of China's efforts to forge the future: the country has the world's most severe restrictions on Internet freedom, according to advocacy group Freedom House. China employs a highly sophisticated censorship apparatus, dubbed the Great Firewall, to snuff out any content deemed critical or inappropriate. Google, Facebook and Twitter, as well as news portals like the New York Times, Bloomberg and TIME, are banned. Manned by an army of 2 million online censors, the Great Firewall gives outsiders the impression of deathly silence within.

But in fact, business thrives inside the firewall's confines (on its guardians' terms, of course) and the restrictions have not appeared to stymie progress. "It turns out you don't need to know the truth of what happened in Tiananmen Square to develop a great smartphone app," says Kaiser Kuo, formerly Baidu's head of international communications and a co-host of Sinica, an authoritative podcast on China. "There is a deep hubris in the West about this." The central government in Beijing has a fearsome capacity to get things done and is willing to back its policy priorities with hard cash. The benefits for companies willing or able to go along with its whims are clear. The question for Baidu (and for Li) is how far it is willing to go.

The work ethic in Chinese tech companies far outpaces their US rivals
- MICHAEL MORITZ

The declaration by Didi, the Chinese ride-hailing company, that delivery business Meituan's decision to launch a rival service would spark "the war of the century", throws the intensive competition between the country's technology companies into stark relief.

The call to arms will certainly act as a spur for Didi employees, although it is difficult to see how they can work even harder. But what it does reveal is the striking contrast between working life in China's technology companies and their counterparts in the west.

In California, the blogosphere has been full of chatter about the inequity of life. Some of this, especially for women, is true and for certain individuals their day of reckoning has been long overdue. But many of the soul-sapping discussions seem like unwarranted distractions. In recent months, there have been complaints about the political sensibilities of speakers invited to address a corporate audience; debates over the appropriate length of paternity leave or work-life balances; and grumbling about the need for a space for musical jam sessions. These seem like the concerns of a society that is becoming unhinged.

...

While male chauvinism is still common in the home, women have an easier time gaining recognition and respect in China's technology workplaces, although they are still seriously under-represented in the senior ranks. Many of these high-flyers only see their children (who are often raised by a grandmother or nanny) for a few minutes a day. There are even examples of husbands, eager to spend time with their wives, who travel with them on business trips as a way to maintain contact.

What I learned from 5 weeks in Beijing + Shanghai:

- startup creation + velocity dwarfs anything in SF
- no one in China I met is remotely worried about U.S. or possibly even cares
- scale feels about 20x of SF
- endless energy

https://www.reuters.com/article/us-china-economy-tech-analysis/china-goes-on-tech-hiring-binge-and-wages-soar-closing-gap-with-silicon-valley-idUSKBN1FD37S

https://archive.is/JpHik
Western values are freeriding on Western innovation.
--
Comparatively unimpeded pursuit of curiosity into innovation is a Western value that pays the carriage fare.
--
True. A lot of values are worthwhile in certain contexts but should never have been scaled.

Diversity, "social mobility", iconoclasm
--
--
but due to military and technological victory over its competitors
--
There's something to be said for Western social trust as well, though that's an institution more than an idea
essay  yc  culture  society  philosophy  reflection  contrarianism  meta:rhetoric  thiel  embedded-cognition  paulg  water  🖥  techtariat  barons  info-dynamics  realness  truth  straussian  open-closed  preference-falsification  individualism-collectivism  courage  orwellian  multi  backup  econotariat  marginal-rev  commentary  links  quotes  hard-tech  skunkworks  enhancement  genetics  biotech  sv  tech  trends  civil-liberty  exit-voice  longevity  environment  innovation  frontier  politics  identity-politics  zeitgeist  china  asia  sinosphere  censorship  news  org:lite  org:biz  debate  twitter  social  social-norms  gender  sex  sexuality  org:med  blowhards  drama  google  poll  descriptive  values  rot  humility  tradeoffs  government  the-great-west-whale  internet  occident  org:rec  org:anglo  venture  vitality  gibbon  competition  investing  martial  discussion  albion  journos-pundits  europe  ideology  free-riding  degrees-of-freedom  land  gnon  peace-violence  diversity  mobility  tradition  reason  curiosity  trust  n-factor  institutions  th
october 2016 by nhaliday