abstraction   1833


C Is Not a Low-level Language - ACM Queue
In the wake of the recent Meltdown and Spectre vulnerabilities, it's worth spending some time looking at root causes. Both of these vulnerabilities involved processors speculatively executing instructions past some kind of access check and allowing the attacker to observe the results via a side channel. The features that led to these vulnerabilities, along with several others, were added to let C programmers continue to believe they were programming in a low-level language, when this hasn't been the case for decades.
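For orientation, here is a minimal sketch (mine, not from the article) of the "bounds check bypass" gadget popularized by the original Spectre paper; array1, array2, and the 512-byte stride are the illustrative names from that paper:

    #include <stddef.h>
    #include <stdint.h>

    uint8_t array1[16];
    uint8_t array2[256 * 512];

    void victim(size_t x, size_t array1_size) {
        if (x < array1_size) {
            /* The processor may speculatively take this branch even when
             * x is out of bounds; the dependent load below then pulls a
             * line of array2 into the cache, where an attacker can detect
             * it by timing later accesses (the side channel). */
            volatile uint8_t y = array2[array1[x] * 512];
            (void)y;
        }
    }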
c  abstraction  complexity  optimization  cpu_architecture  languages 
15 days ago by cdzombak
oshyshko/uio: a Clojure library for accessing HDFS, S3, SFTP and other file systems via a single API
clojure  files  library  api  abstraction  storage  protocol 
22 days ago by orlin
Props in Network Theory | Azimuth
We start with circuits made solely of ideal perfectly conductive wires. Then we consider circuits with passive linear components like resistors, capacitors and inductors. Finally we turn on the power and consider circuits that also have voltage and current sources.

And here’s the cool part: each kind of circuit corresponds to a prop that pure mathematicians would eventually invent on their own! So, what’s good for engineers is often mathematically natural too.

commentary: while abstract, it might be worth trying to understand this stuff
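For readers trying to place the term: a prop, in Baez's usage, is a strict symmetric monoidal category whose objects are the natural numbers, with tensor given by addition on objects. The gloss below is mine, not the post's:

    % A morphism f : m -> n is a process with m inputs and n outputs;
    % composition plugs outputs into inputs, and the tensor runs
    % processes side by side:
    f \colon m \to n, \qquad g \circ f, \qquad
    f \otimes f' \colon m + m' \to n + n'.
    % A circuit with m input terminals and n output terminals is then
    % a morphism m -> n in the prop for that class of circuits.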
network-theory  abstraction  rather-interesting  models-and-modes  circles-and-arrows  bond-diagrams  to-write-about  to-understand  functional-programming  category-theory  via:Vaguery 
26 days ago by WMTrenfield
Props in Network Theory | Azimuth
We start with circuits made solely of ideal perfectly conductive wires. Then we consider circuits with passive linear components like resistors, capacitors and inductors. Finally we turn on the power and consider circuits that also have voltage and current sources.

And here’s the cool part: each kind of circuit corresponds to a prop that pure mathematicians would eventually invent on their own! So, what’s good for engineers is often mathematically natural too.
network-theory  abstraction  rather-interesting  models-and-modes  circles-and-arrows  bond-diagrams  to-write-about  to-understand  functional-programming  category-theory 
26 days ago by Vaguery
Is the human brain analog or digital? - Quora
The brain is neither analog nor digital, but works using a signal processing paradigm that has some properties in common with both.
 
Unlike a digital computer, the brain does not use binary logic or binary addressable memory, and it does not perform binary arithmetic. Information in the brain is represented in terms of statistical approximations and estimations rather than exact values. The brain is also non-deterministic and cannot replay instruction sequences with error-free precision. So in all these ways, the brain is definitely not "digital".
 
At the same time, the signals sent around the brain are "either-or" states that are similar to binary. A neuron fires or it does not. These all-or-nothing pulses are the basic language of the brain. So in this sense, the brain is computing using something like binary signals. Instead of 1s and 0s, or "on" and "off", the brain uses "spike" or "no spike" (referring to the firing of a neuron).
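A textbook leaky integrate-and-fire model (a standard sketch, not from the answer) makes this split concrete: the membrane potential is a graded, analog quantity, but the emitted event is all-or-nothing. Parameter values here are arbitrary:

    #include <stdio.h>

    int main(void) {
        double v = 0.0;            /* membrane potential: graded, "analog" */
        const double leak = 0.95;  /* passive decay per time step */
        const double threshold = 1.0;
        const double input = 0.12; /* constant injected current */

        for (int t = 0; t < 50; t++) {
            v = v * leak + input;  /* analog integration with leak */
            if (v >= threshold) {  /* output is all-or-nothing */
                printf("t=%d: spike\n", t);
                v = 0.0;           /* reset after firing */
            }
        }
        return 0;
    }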
q-n-a  qra  expert-experience  neuro  neuro-nitgrit  analogy  deep-learning  nature  discrete  smoothness  IEEE  bits  coding-theory  communication  trivia  bio  volo-avolo  causation  random  order-disorder  ems  models  methodology  abstraction  nitty-gritty  computation  physics  electromag  scale  coarse-fine 
6 weeks ago by nhaliday
On Abstraction – Zach Tellman
How abstractions work.
Particularly in software.
abstraction  youtube  video 
6 weeks ago by drmeme
What we talk about when we talk about monads
This paper is not a monad tutorial. It will not tell you what a monad is. Instead, it helps you understand how computer scientists and programmers talk about monads and why they do so. To answer these questions, we review the history of monads in the context of programming and study the development through the perspectives of philosophy of science, philosophy of mathematics and cognitive sciences.
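For reference while reading (the paper deliberately omits this), the standard category-theoretic definition being talked around is:

    % A monad on a category C is an endofunctor T : C -> C together
    % with natural transformations (unit and multiplication)
    \eta \colon 1_{\mathcal{C}} \Rightarrow T, \qquad
    \mu \colon T^2 \Rightarrow T,
    % satisfying associativity and the unit laws:
    \mu \circ T\mu = \mu \circ \mu T, \qquad
    \mu \circ T\eta = \mu \circ \eta T = \mathrm{id}_T.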
monad  monads  metaphor  philosophy  science  abstraction  programming 
6 weeks ago by drmeme
Lessons from Optics, The Other Deep Learning – arg min blog
Trying to frame our discussions of the science of deep learning, while we are still pre-Newtonian in much of it...
deep  machine  learning  abstraction  mental  model  design  science  epistemology 
7 weeks ago by asteroza
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar ability levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single-system drift suggests that they expect a single main AI system.

The main reason I understand for expecting relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually few large packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have so far seen the same in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered AlphaGo Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beaten by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.
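The parenthetical is the classic sampling argument (my gloss, not Hanson's): if performance on each task pools many modules with independent quality levels and nonnegative weights, then all task scores correlate positively, and a dominant common factor emerges with no single underlying ability:

    P_j = \sum_i w_{ij} m_i, \qquad
    \operatorname{Cov}(P_j, P_k)
      = \sum_i w_{ij} w_{ik} \operatorname{Var}(m_i) \ge 0,
    % so the correlation matrix of task scores has a large positive
    % first principal component, i.e., a common "g" factor.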

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above, the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a superintelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-preference 
7 weeks ago by nhaliday
The one ring problem: abstraction and our quest for power
"Quite a lot of papers would come up with something they wanted to do, show that existing designs were incapable of doing it, then design some more powerful system where they could.

I believe this thought process is a common failing among programmers."
programming  piperesearch  abstraction 
10 weeks ago by mechazoidal
Programming with a love of the implicit – Signal v. Noise
DHH votes for implicit over explicit code.

I vote for explicit code that supplies what implicit delivers without hiding so much. Good functional languages provide that.
rails  ruby  abstraction 
10 weeks ago by scottnelsonsmith


