nhaliday + publishing   64

An Efficiency Comparison of Document Preparation Systems Used in Academic Research and Development
The choice of an efficient document preparation system is an important decision for any academic researcher. To assist the research community, we report a software usability study in which 40 researchers across different disciplines prepared scholarly texts with either Microsoft Word or LaTeX. The probe texts included simple continuous text, text with tables and subheadings, and complex text with several mathematical equations. We show that LaTeX users were slower than Word users, wrote less text in the same amount of time, and produced more typesetting, orthographical, grammatical, and formatting errors. On most measures, expert LaTeX users performed even worse than novice Word users. LaTeX users, however, more often reported enjoying using their respective software. We conclude that even experienced LaTeX users may suffer a loss in productivity when LaTeX is used, relative to other document preparation systems. Individuals, institutions, and journals should carefully consider the ramifications of this finding when choosing document preparation strategies, or requiring them of authors.

...

However, our study suggests that LaTeX should be used as a document preparation system only in cases in which a document is heavily loaded with mathematical equations. For all other types of documents, our results suggest that LaTeX reduces the user’s productivity and results in more orthographical, grammatical, and formatting errors, more typos, and less written text than Microsoft Word over the same duration of time. LaTeX users may argue that the overall quality of the text that is created with LaTeX is better than the text that is created with Microsoft Word. Although this argument may be true, the differences between text produced in more recent editions of Microsoft Word and text produced in LaTeX may be less obvious than they were in the past. Moreover, we believe that the appearance of text matters less than the scientific content and impact on the field. In particular, LaTeX is also used frequently for text that does not contain a significant amount of mathematical symbols and formulae. We believe that the use of LaTeX under these circumstances is highly problematic and that researchers should reflect on the criteria that drive their preferences to use LaTeX over Microsoft Word for text that does not require significant mathematical representations.

...

A second decision criterion that factors into the choice to use a particular software system is reflection about what drives certain preferences. A striking result of our study is that LaTeX users are highly satisfied with their system despite reduced usability and productivity. From a psychological perspective, this finding may be related to motivational factors, i.e., the driving forces that compel or reinforce individuals to act in a certain way to achieve a desired goal. A vital motivational factor is the tendency to reduce cognitive dissonance. According to the theory of cognitive dissonance, each individual has a motivational drive to seek consonance between their beliefs and their actual actions. If a belief set does not concur with the individual’s actual behavior, then it is usually easier to change the belief rather than the behavior [6]. Many psychological studies in which people have been asked to choose one of two items (e.g., products, objects, gifts, etc.) and then to rate the desirability, value, attractiveness, or usefulness of their choice report that participants often reduce unpleasant feelings of cognitive dissonance by rationalizing the chosen alternative as more desirable than the unchosen alternative [6, 7]. This bias is usually unconscious and becomes stronger as the effort to reject the chosen alternative increases, which is similar in nature to the case of learning and using LaTeX.

...

Given these numbers, it remains an open question how much taxpayer money is spent worldwide for researchers to use LaTeX over a more efficient document preparation system, which would free up their time to advance their respective fields. Some publishers may save a significant amount of money by requesting or allowing LaTeX submissions because a well-formed LaTeX document complying with a well-designed class file (template) is much easier to bring into their publication workflow. However, this is at the expense of the researchers’ labor time and effort. We therefore suggest that leading scientific journals should consider accepting submissions in LaTeX only if this is justified by the level of mathematics presented in the paper. In all other cases, we think that scholarly journals should request authors to submit their documents in Word or PDF format. We believe that this would be a good policy for two reasons. First, we think that the appearance of the text is secondary to the scientific merit of an article and its impact on the field. And, second, preventing researchers from producing documents in LaTeX would save time and money, maximizing the benefit of research and development for both the research team and the public.

[ed.: I sense some salt.

And basically no description of how "# errors" was calculated.]

https://news.ycombinator.com/item?id=8797002
I question the validity of their methodology.
At no point in the paper is exactly what is meant by a "formatting error" or a "typesetting error" defined. From what I gather, the participants in the study were required to reproduce the formatting and layout of the sample text. In theory, a LaTeX file should strictly be a semantic representation of the content of the document; while TeX may have been a raw typesetting language, this is most definitely not the intended use case of LaTeX and is overall a very poor test of its relative advantages and capabilities.
The separation of the semantic definition of the content from the rendering of the document is, in my opinion, the most important feature of LaTeX. Like CSS, this allows the actual formatting to be abstracted away, allowing plain (marked-up) content to be written without worrying about typesetting.
Word has some similar capabilities with styles, and can be used in a similar manner, though few Word users actually use the software properly. This may sound like a relatively insignificant point, but in practice, almost every Word document I have seen has some form of inconsistent formatting. If Word disallowed local formatting changes (including things such as relative spacing of nested bullet points), forcing all formatting changes to be done in document-global styles, it would be a far better typesetting system. Also, the users would be very unhappy.
Yes, LaTeX can undeniably be a pain in the arse, especially when it comes to trying to get figures in the right place; however, the advantages of combining a simple, semantic plain-text representation with a flexible and professional typesetting and rendering engine are undeniable and completely unaddressed by this study.
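[ed.: A minimal sketch of the content/presentation separation the comment describes. The \term macro and its styling are illustrative, not standard LaTeX:]

```latex
\documentclass{article}
% Define the presentation of a semantic concept once, in the preamble.
% \term is a made-up name; restyling every use means editing this one line,
% e.g. swapping \emph for \textbf, with no local formatting in the body.
\newcommand{\term}[1]{\emph{#1}}
\begin{document}
A \term{confidence interval} is a statement about a repeated procedure,
not about any one computed interval.
\end{document}
```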
--
It seems that the test was heavily biased in favor of WYSIWYG.
Of course that approach makes it very simple to reproduce something, as has been tested here. Even simpler would be to scan the document and run OCR. The massive problem with both approaches (WYSIWYG and scanning) is that you can't generalize any of it. You're doomed to repeat it forever.
(I'll also note the other significant issue with this study: when the ratings provided by participants came out opposite of their test results, they attributed it to irrational bias.)

https://www.nature.com/articles/d41586-019-01796-1
Over the past few years however, the line between the tools has blurred. In 2017, Microsoft made it possible to use LaTeX’s equation-writing syntax directly in Word, and last year it scrapped Word’s own equation editor. Other text editors also support elements of LaTeX, allowing newcomers to use as much or as little of the language as they like.

https://news.ycombinator.com/item?id=20191348
study  hmm  academia  writing  publishing  yak-shaving  technical-writing  software  tools  comparison  latex  scholar  regularizer  idk  microsoft  evidence-based  science  desktop  time  efficiency  multi  hn  commentary  critique  news  org:sci  flux-stasis  duplication  metrics  biases 
june 2019 by nhaliday
What's the expected level of paper for top conferences in Computer Science - Academia Stack Exchange
Top. The top level.

My experience on program committees for STOC, FOCS, ITCS, SODA, SOCG, etc., is that there are FAR more submissions of publishable quality than can be accepted into the conference. By "publishable quality" I mean a well-written presentation of a novel, interesting, and non-trivial result within the scope of the conference.

...

There are several questions that come up over and over in the FOCS/STOC review cycle:

- How surprising / novel / elegant / interesting is the result?
- How surprising / novel / elegant / interesting / general are the techniques?
- How technically difficult is the result? Ironically, FOCS and STOC committees have a reputation for ignoring the distinction between trivial (easy to derive from scratch) and nondeterministically trivial (easy to understand after the fact).
- What is the expected impact of this result? Is this paper going to change the way people do theoretical computer science over the next five years?
- Is the result of general interest to the theoretical computer science community? Or is it only of interest to a narrow subcommunity? In particular, if the topic is outside the STOC/FOCS mainstream—say, for example, computational topology—does the paper do a good job of explaining and motivating the results to a typical STOC/FOCS audience?
nibble  q-n-a  overflow  academia  tcs  cs  meta:research  publishing  scholar  lens  properties  cost-benefit  analysis  impetus  increase-decrease  soft-question  motivation  proofs  search  complexity  analogy  problem-solving  elegance  synthesis  hi-order-bits  novelty  discovery 
june 2019 by nhaliday
algorithm, algorithmic, algorithmicx, algorithm2e, algpseudocode = confused - TeX - LaTeX Stack Exchange
algorithm2e is the only one currently maintained, but the answerer prefers the style of algorithmicx, and after perusing the docs, so do I
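[ed.: For reference, a minimal sketch of the algorithmicx (algpseudocode) style the answer prefers; the algorithm itself is an arbitrary example:]

```latex
\documentclass{article}
\usepackage{algorithm}       % floating wrapper with caption
\usepackage{algpseudocode}   % the algorithmicx pseudocode layout
\begin{document}
\begin{algorithm}
\caption{Linear search}
\begin{algorithmic}[1]       % [1] numbers every line
\For{$i \gets 1, \dots, n$}
    \If{$A[i] = x$}
        \State \Return $i$
    \EndIf
\EndFor
\State \Return $-1$
\end{algorithmic}
\end{algorithm}
\end{document}
```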
q-n-a  stackex  libraries  list  recommendations  comparison  publishing  cs  programming  algorithms  tools 
june 2019 by nhaliday
bibliographies - bibtex vs. biber and biblatex vs. natbib - TeX - LaTeX Stack Exchange
- bibtex and biber are external programs that process bibliography information and act (roughly) as the interface between your .bib file and your LaTeX document.
- natbib and biblatex are LaTeX packages that format citations and bibliographies; natbib works only with bibtex, while biblatex (at the moment) works with both bibtex and biber.
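[ed.: A sketch of what the two setups look like in a preamble. refs.bib and the style choices are placeholders, and the two halves are alternatives, not to be combined in one document:]

```latex
% --- natbib route: bibtex backend, formatting lives in a .bst style file ---
\usepackage[round]{natbib}
% ... in the body:
\bibliographystyle{plainnat}
\bibliography{refs}            % reads refs.bib, run with: latex -> bibtex -> latex x2

% --- biblatex route: biber backend, formatting is LaTeX code in the style ---
\usepackage[backend=biber,style=authoryear]{biblatex}
\addbibresource{refs.bib}      % note the extension, unlike \bibliography
% ... in the body:
\printbibliography             % run with: latex -> biber -> latex x2
```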

natbib
The natbib package has been around for quite a long time, and although still maintained, it is fair to say that it isn't being further developed. It is still widely used, and very reliable.

Advantages
...
- The resulting bibliography code can be pasted directly into a document (often required for journal submissions). See Biblatex: submitting to a journal.

...

biblatex
The biblatex package is being actively developed in conjunction with the biber backend.

Advantages
*lots*

Disadvantages
- Journals and publishers may not accept documents that use biblatex if they have a house style with its own natbib compatible .bst file.
q-n-a  stackex  latex  comparison  cost-benefit  writing  scholar  technical-writing  yak-shaving  publishing 
may 2019 by nhaliday
soft question - What are good non-English languages for mathematicians to know? - MathOverflow
I'm with Deane here: I think learning foreign languages is not a very mathematically productive thing to do; of course, there are lots of good reasons to learn foreign languages, but doing mathematics is not one of them. Not only are there few modern mathematics papers written in languages other than English, but the primary other language they are written in (French) is pretty easy to read without actually knowing it.

Even though I've been to France several times, my spoken French mostly consists of "merci," "s'il vous plaît," "d'accord" and some food words; I've still skimmed 100 page long papers in French without a lot of trouble.

If nothing else, think of reading a paper in French as a good opportunity to teach Google Translate some mathematical French.
q-n-a  overflow  math  academia  learning  foreign-lang  publishing  science  french  soft-question  math.AG  nibble  quixotic  comparison  language  china  asia  trends 
february 2019 by nhaliday
The Hanson-Yudkowsky AI-Foom Debate - Machine Intelligence Research Institute
How Deviant Recent AI Progress Lumpiness?: http://www.overcomingbias.com/2018/03/how-deviant-recent-ai-progress-lumpiness.html
I seem to disagree with most people working on artificial intelligence (AI) risk. While with them I expect rapid change once AI is powerful enough to replace most all human workers, I expect this change to be spread across the world, not concentrated in one main localized AI system. The efforts of AI risk folks to design AI systems whose values won’t drift might stop global AI value drift if there is just one main AI system. But doing so in a world of many AI systems at similar abilities levels requires strong global governance of AI systems, which is a tall order anytime soon. Their continued focus on preventing single system drift suggests that they expect a single main AI system.

The main reason that I understand to expect relatively local AI progress is if AI progress is unusually lumpy, i.e., arriving in unusually fewer larger packages rather than in the usual many smaller packages. If one AI team finds a big lump, it might jump way ahead of the other teams.

However, we have a vast literature on the lumpiness of research and innovation more generally, which clearly says that usually most of the value in innovation is found in many small innovations. We have also so far seen this in computer science (CS) and AI, even if there have been historical examples where much value was found in particular big innovations, such as nuclear weapons or the origin of humans.

Apparently many people associated with AI risk, including the star machine learning (ML) researchers that they often idolize, find it intuitively plausible that AI and ML progress is exceptionally lumpy. Such researchers often say, “My project is ‘huge’, and will soon do it all!” A decade ago my ex-co-blogger Eliezer Yudkowsky and I argued here on this blog about our differing estimates of AI progress lumpiness. He recently offered Alpha Go Zero as evidence of AI lumpiness:

...

In this post, let me give another example (beyond two big lumps in a row) of what could change my mind. I offer a clear observable indicator, for which data should be available now: deviant citation lumpiness in recent ML research. One standard measure of research impact is citations; bigger, lumpier developments gain more citations than smaller ones. And it turns out that the lumpiness of citations is remarkably constant across research fields! See this March 3 paper in Science:

I Still Don’t Get Foom: http://www.overcomingbias.com/2014/07/30855.html
All of which makes it look like I’m the one with the problem; everyone else gets it. Even so, I’m gonna try to explain my problem again, in the hope that someone can explain where I’m going wrong. Here goes.

“Intelligence” just means an ability to do mental/calculation tasks, averaged over many tasks. I’ve always found it plausible that machines will continue to do more kinds of mental tasks better, and eventually be better at pretty much all of them. But what I’ve found it hard to accept is a “local explosion.” This is where a single machine, built by a single project using only a tiny fraction of world resources, goes in a short time (e.g., weeks) from being so weak that it is usually beat by a single human with the usual tools, to so powerful that it easily takes over the entire world. Yes, smarter machines may greatly increase overall economic growth rates, and yes such growth may be uneven. But this degree of unevenness seems implausibly extreme. Let me explain.

If we count by economic value, humans now do most of the mental tasks worth doing. Evolution has given us a brain chock-full of useful well-honed modules. And the fact that most mental tasks require the use of many modules is enough to explain why some of us are smarter than others. (There’d be a common “g” factor in task performance even with independent module variation.) Our modules aren’t that different from those of other primates, but because ours are different enough to allow lots of cultural transmission of innovation, we’ve out-competed other primates handily.

We’ve had computers for over seventy years, and have slowly built up libraries of software modules for them. Like brains, computers do mental tasks by combining modules. An important mental task is software innovation: improving these modules, adding new ones, and finding new ways to combine them. Ideas for new modules are sometimes inspired by the modules we see in our brains. When an innovation team finds an improvement, they usually sell access to it, which gives them resources for new projects, and lets others take advantage of their innovation.

...

In Bostrom’s graph above the line for an initially small project and system has a much higher slope, which means that it becomes in a short time vastly better at software innovation. Better than the entire rest of the world put together. And my key question is: how could it plausibly do that? Since the rest of the world is already trying the best it can to usefully innovate, and to abstract to promote such innovation, what exactly gives one small project such a huge advantage to let it innovate so much faster?

...

In fact, most software innovation seems to be driven by hardware advances, instead of innovator creativity. Apparently, good ideas are available but must usually wait until hardware is cheap enough to support them.

Yes, sometimes architectural choices have wider impacts. But I was an artificial intelligence researcher for nine years, ending twenty years ago, and I never saw an architecture choice make a huge difference, relative to other reasonable architecture choices. For most big systems, overall architecture matters a lot less than getting lots of detail right. Researchers have long wandered the space of architectures, mostly rediscovering variations on what others found before.

Some hope that a small project could be much better at innovation because it specializes in that topic, and much better understands new theoretical insights into the basic nature of innovation or intelligence. But I don’t think those are actually topics where one can usefully specialize much, or where we’ll find much useful new theory. To be much better at learning, the project would instead have to be much better at hundreds of specific kinds of learning. Which is very hard to do in a small project.

What does Bostrom say? Alas, not much. He distinguishes several advantages of digital over human minds, but all software shares those advantages. Bostrom also distinguishes five paths: better software, brain emulation (i.e., ems), biological enhancement of humans, brain-computer interfaces, and better human organizations. He doesn’t think interfaces would work, and sees organizations and better biology as only playing supporting roles.

...

Similarly, while you might imagine someday standing in awe in front of a super intelligence that embodies all the power of a new age, superintelligence just isn’t the sort of thing that one project could invent. As “intelligence” is just the name we give to being better at many mental tasks by using many good mental modules, there’s no one place to improve it. So I can’t see a plausible way one project could increase its intelligence vastly faster than could the rest of the world.

Takeoff speeds: https://sideways-view.com/2018/02/24/takeoff-speeds/
Futurists have argued for years about whether the development of AGI will look more like a breakthrough within a small group (“fast takeoff”), or a continuous acceleration distributed across the broader economy or a large firm (“slow takeoff”).

I currently think a slow takeoff is significantly more likely. This post explains some of my reasoning and why I think it matters. Mostly the post lists arguments I often hear for a fast takeoff and explains why I don’t find them compelling.

(Note: this is not a post about whether an intelligence explosion will occur. That seems very likely to me. Quantitatively I expect it to go along these lines. So e.g. while I disagree with many of the claims and assumptions in Intelligence Explosion Microeconomics, I don’t disagree with the central thesis or with most of the arguments.)
ratty  lesswrong  subculture  miri-cfar  ai  risk  ai-control  futurism  books  debate  hanson  big-yud  prediction  contrarianism  singularity  local-global  speed  speedometer  time  frontier  distribution  smoothness  shift  pdf  economics  track-record  abstraction  analogy  links  wiki  list  evolution  mutation  selection  optimization  search  iteration-recursion  intelligence  metameta  chart  analysis  number  ems  coordination  cooperate-defect  death  values  formal-values  flux-stasis  philosophy  farmers-and-foragers  malthus  scale  studying  innovation  insight  conceptual-vocab  growth-econ  egalitarianism-hierarchy  inequality  authoritarianism  wealth  near-far  rationality  epistemic  biases  cycles  competition  arms  zero-positive-sum  deterrence  war  peace-violence  winner-take-all  technology  moloch  multi  plots  research  science  publishing  humanity  labor  marginal  urban-rural  structure  composition-decomposition  complex-systems  gregory-clark  decentralized  heavy-industry  magnitude  multiplicative  endogenous-exogenous  models  uncertainty  decision-theory  time-prefer 
april 2018 by nhaliday
Forgotten Books
"read old books"

they have a copy of G.M. Cookson's Aeschylus translations
books  publishing  store  brands  todo  literature  history  early-modern  pre-ww2  britain  aristos  tip-of-tongue  classic  old-anglo  letters  anglosphere  the-classics  big-peeps  canon  database  search  wisdom 
november 2017 by nhaliday
Peer review is younger than you think - Marginal REVOLUTION
I’d like to see a detailed look at actual journal practices, but my personal sense is that editorial review was the norm until fairly recently, not review by a team of outside referees.  In 1956, for instance, the American Historical Review asked for only one submission copy, and it seems the same was true as late as 1970.  I doubt they made the photocopies themselves. Schmidt seems to suggest that the practices of government funders nudged the academic professions into more formal peer review with multiple referee reports.
econotariat  marginal-rev  commentary  data  gbooks  trends  anglo  language  zeitgeist  search  history  mostly-modern  science  meta:science  institutions  academia  publishing  trivia  cocktail  links 
september 2017 by nhaliday
Here Be Sermons | Melting Asphalt
The Costly Coordination Mechanism of Common Knowledge: https://www.lesserwrong.com/posts/9QxnfMYccz9QRgZ5z/the-costly-coordination-mechanism-of-common-knowledge
- Dictatorships all through history have attempted to suppress freedom of the press and freedom of speech. Why is this? Are they just very sensitive? On the other side, the leaders of the Enlightenment fought for freedom of speech, and would not budge an inch against this principle.
- When two people are on a date and want to sleep with each other, the conversation will often move towards but never explicitly discuss having sex. The two may discuss going back to the place of one of theirs, with a different explicit reason discussed (e.g. "to have a drink"), even if both want to have sex.
- Throughout history, communities have had religious rituals that look very similar. Everyone in the village has to join in. There are repetitive songs, repetitive lectures on the same holy books, chanting together. Why, of all the possible community events (e.g. dinner, parties, etc) is this the most common type?
What these three things have in common, is common knowledge - or at least, the attempt to create it.

...

Common knowledge is often much easier to build in small groups - in the example about getting off the bus, the two need only to look at each other, share a nod, and common knowledge is achieved. Building common knowledge between hundreds or thousands of people is significantly harder, and the fact that religion has such a significant ability to do so is why it has historically had so much connection to politics.
postrat  simler  essay  insight  community  religion  theos  speaking  impro  morality  info-dynamics  commentary  ratty  yvain  ssc  obama  race  hanson  tribalism  network-structure  peace-violence  cohesion  gnosis-logos  multi  todo  enlightenment-renaissance-restoration-reformation  sex  sexuality  coordination  cooperate-defect  lesswrong  ritual  free-riding  GT-101  equilibrium  civil-liberty  exit-voice  game-theory  nuclear  deterrence  arms  military  defense  money  monetary-fiscal  government  drugs  crime  sports  public-goodish  leviathan  explanation  incentives  interests  gray-econ  media  trust  revolution  signaling  tradition  power  internet  social  facebook  academia  publishing  communication  business  startups  cost-benefit  iteration-recursion  social-norms  reinforcement  alignment 
september 2017 by nhaliday
National hiring experiments reveal 2:1 faculty preference for women on STEM tenure track
Here we report five hiring experiments in which faculty evaluated hypothetical female and male applicants, using systematically varied profiles disguising identical scholarship, for assistant professorships in biology, engineering, economics, and psychology. Contrary to prevailing assumptions, men and women faculty members from all four fields preferred female applicants 2:1 over identically qualified males with matching lifestyles (single, married, divorced), with the exception of male economists, who showed no gender preference. Comparing different lifestyles revealed that women preferred divorced mothers to married fathers and that men preferred mothers who took parental leaves to mothers who did not.

Double-blind review favours increased representation of female authors: http://www.sciencedirect.com/science/article/pii/S0169534707002704
Double-blind peer review, in which neither author nor reviewer identity are revealed, is rarely practised in ecology or evolution journals. However, in 2001, double-blind review was introduced by the journal Behavioral Ecology. Following this policy change, there was a significant increase in female first-authored papers, a pattern not observed in a very similar journal that provides reviewers with author information. No negative effects could be identified, suggesting that double-blind review should be considered by other journals.

Teaching accreditation exams reveal grading biases favor women in male-dominated disciplines in France: http://science.sciencemag.org/content/353/6298/474
This bias turns from 3 to 5 percentile ranks for men in literature and foreign languages to about 10 percentile ranks for women in math, physics, or philosophy.
study  org:nat  science  meta:science  gender  discrimination  career  progression  planning  long-term  values  academia  field-study  null-result  effect-size  🎓  multi  publishing  intervention  biases 
july 2017 by nhaliday
Bekker numbering - Wikipedia
Bekker numbering or Bekker pagination is the standard form of citation to the works of Aristotle. It is based on the page numbers used in the Prussian Academy of Sciences edition of the complete works of Aristotle and takes its name from the editor of that edition, the classical philologist August Immanuel Bekker (1785-1871); because the Academy was located in Berlin, the system is occasionally referred to by the alternative name Berlin numbering or Berlin pagination.[1]

Bekker numbers take the format of up to four digits, a letter for column 'a' or 'b', then the line number. For example, the beginning of Aristotle's Nicomachean Ethics is 1094a1, which corresponds to page 1094 of Bekker's edition of the Greek text of Aristotle's works, first column, line 1.[2]
history  iron-age  mediterranean  the-classics  literature  jargon  early-modern  publishing  canon  wiki  reference  protocol-metadata 
july 2017 by nhaliday
To err is human; so is the failure to admit it
Lowering the cost of admitting error could help defuse these crises. A new issue of Econ Journal Watch, an online journal, includes a symposium in which prominent economic thinkers are asked to provide their “most regretted statements”. Held regularly, such exercises might take the shame out of changing your mind. Yet the symposium also shows how hard it is for scholars to grapple with intellectual regret. Some contributions are candid; Tyler Cowen’s analysis of how and why he underestimated the risk of financial crisis in 2007 is enlightening. But some disappoint, picking out regrets that cast the writer in a flattering light or using the opportunity to shift blame.
news  org:rec  org:anglo  org:biz  economics  error  wire-guided  priors-posteriors  publishing  econotariat  marginal-rev  cycles  journos-pundits  responsibility  failure 
june 2017 by nhaliday
List of Chinese inventions - Wikipedia
China has been the source of many innovations, scientific discoveries and inventions.[1] This includes the Four Great Inventions: papermaking, the compass, gunpowder, and printing (both woodblock and movable type). The list below contains these and other inventions in China attested by archaeology or history.
china  asia  sinosphere  technology  innovation  discovery  list  top-n  wiki  reference  article  history  iron-age  medieval  arms  summary  frontier  agriculture  dirty-hands  civilization  the-trenches  electromag  communication  writing  publishing  archaeology  navigation 
june 2017 by nhaliday
Genome-Wide Association Study Reveals Multiple Loci Influencing Normal Human Facial Morphology
http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0099009
https://twitter.com/dgmacarthur/status/904908988516585472
https://twitter.com/piper_jason/status/905128320869662720
http://www.biorxiv.org/content/early/2017/09/07/185330
https://www.technologyreview.com/s/608813/does-your-genome-predict-your-face-not-quite-yet/

http://journals.plos.org/plosbiology/article?id=10.1371/journal.pbio.1000451
Domestic dogs exhibit tremendous phenotypic diversity, including a greater variation in body size than any other terrestrial mammal. Here, we generate a high density map of canine genetic variation by genotyping 915 dogs from 80 domestic dog breeds, 83 wild canids, and 10 outbred African shelter dogs across 60,968 single-nucleotide polymorphisms (SNPs). Coupling this genomic resource with external measurements from breed standards and individuals as well as skeletal measurements from museum specimens, we identify 51 regions of the dog genome associated with phenotypic variation among breeds in 57 traits. The complex traits include average breed body size and external body dimensions and cranial, dental, and long bone shape and size with and without allometric scaling. In contrast to the results from association mapping of quantitative traits in humans and domesticated plants, we find that across dog breeds, a small number of quantitative trait loci (≤3) explain the majority of phenotypic variation for most of the traits we studied. In addition, many genomic regions show signatures of recent selection, with most of the highly differentiated regions being associated with breed-defining traits such as body size, coat characteristics, and ear floppiness. Our results demonstrate the efficacy of mapping multiple traits in the domestic dog using a database of genotyped individuals and highlight the important role human-directed selection has played in altering the genetic architecture of key traits in this important species.
study  biodet  sapiens  embodied  GWAS  genetics  multi  regularizer  QTL  sex  developmental  genetic-load  evopsych  null-result  nature  model-organism  genomics  twitter  social  scitariat  discussion  publishing  realness  drama  preprint  debate  critique  news  org:mag  org:sci  org:biz 
april 2017 by nhaliday
[0809.5250] The decline in the concentration of citations, 1900-2007
These measures are used for four broad disciplines: natural sciences and engineering, medical fields, social sciences, and the humanities. All these measures converge and show that, contrary to what was reported by Evans, the dispersion of citations is actually increasing.

- natural sciences around 60-70% cited in 2-5 year window
- humanities stands out w/ 10-20% cited (maybe because of focus on books)
study  preprint  science  meta:science  distribution  network-structure  len:short  publishing  density  🔬  info-dynamics  org:mat 
february 2017 by nhaliday
probability - Why does a 95% Confidence Interval (CI) not imply a 95% chance of containing the mean? - Cross Validated
The confidence interval is the answer to the request: "Give me an interval that will bracket the true value of the parameter in 100p% of the instances of an experiment that is repeated a large number of times." The credible interval is an answer to the request: "Give me an interval that brackets the true value with probability p given the particular sample I've actually observed." To be able to answer the latter request, we must first adopt either (a) a new concept of the data generating process or (b) a different concept of the definition of probability itself.
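The frequentist reading above can be checked by simulation: repeat an experiment many times and count how often the interval brackets the true parameter. A minimal sketch (not from the linked thread; the true mean, sigma, sample size, and trial count are illustrative choices, and sigma is assumed known to keep the interval simple):

```python
import random
import statistics

random.seed(0)
mu, sigma, n, z = 10.0, 2.0, 30, 1.96  # z = 1.96 for a 95% interval

trials = 2000
covered = 0
for _ in range(trials):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    mean = statistics.fmean(sample)
    half = z * sigma / n ** 0.5  # known-sigma half-width
    if mean - half <= mu <= mean + half:
        covered += 1

coverage = covered / trials
print(coverage)  # close to 0.95 across many repeated experiments
```

The ~95% figure is a property of the *procedure* over repetitions, which is exactly why it does not translate into "95% probability that this particular interval contains mu" without the Bayesian move described in the quote.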

http://stats.stackexchange.com/questions/139290/a-psychology-journal-banned-p-values-and-confidence-intervals-is-it-indeed-wise

PS. Note that my question is not about the ban itself; it is about the suggested approach. I am not asking about frequentist vs. Bayesian inference either. The Editorial is pretty negative about Bayesian methods too; so it is essentially about using statistics vs. not using statistics at all.

wut

http://stats.stackexchange.com/questions/6966/why-continue-to-teach-and-use-hypothesis-testing-when-confidence-intervals-are
http://stats.stackexchange.com/questions/2356/are-there-any-examples-where-bayesian-credible-intervals-are-obviously-inferior
http://stats.stackexchange.com/questions/2272/whats-the-difference-between-a-confidence-interval-and-a-credible-interval
http://stats.stackexchange.com/questions/6652/what-precisely-is-a-confidence-interval
http://stats.stackexchange.com/questions/1164/why-havent-robust-and-resistant-statistics-replaced-classical-techniques/
http://stats.stackexchange.com/questions/16312/what-is-the-difference-between-confidence-intervals-and-hypothesis-testing
http://stats.stackexchange.com/questions/31679/what-is-the-connection-between-credible-regions-and-bayesian-hypothesis-tests
http://stats.stackexchange.com/questions/11609/clarification-on-interpreting-confidence-intervals
http://stats.stackexchange.com/questions/16493/difference-between-confidence-intervals-and-prediction-intervals
q-n-a  overflow  nibble  stats  data-science  science  methodology  concept  confidence  conceptual-vocab  confusion  explanation  thinking  hypothesis-testing  jargon  multi  meta:science  best-practices  error  discussion  bayesian  frequentist  hmm  publishing  intricacy  wut  comparison  motivation  clarity  examples  robust  metabuch  🔬  info-dynamics  reference  grokkability-clarity 
february 2017 by nhaliday
Paperscape
- includes physics, cs, etc.
- CS is _a lot_ smaller, or at least has much lower citation counts
- size = number citations, placement = citation network structure
papers  publishing  science  meta:science  data  visualization  network-structure  big-picture  dynamic  exploratory  🎓  physics  cs  math  hi-order-bits  survey  visual-understanding  preprint  aggregator  database  search  maps  zooming  metameta  scholar-pack  🔬  info-dynamics  scale  let-me-see  chart 
february 2017 by nhaliday
Medical Hypotheses - Wikipedia
Medical Hypotheses is a medical journal published by Elsevier. It was originally intended as a forum for unconventional ideas without the traditional filter of scientific peer review, "as long as (the ideas) are coherent and clearly expressed" in order to "foster the diversity and debate upon which the scientific process thrives."

they published AIDS denialism at one point
science  medicine  publishing  wiki  history  organization  drama 
january 2017 by nhaliday
CSRankings: Computer Science Rankings (beta)
some missing venues: ITCS, QCRYPT, QIP, COLT (last has some big impact on the margins)
data  higher-ed  grad-school  phd  cs  tcs  list  schools  🎓  top-n  database  conference  ranking  publishing  fall-2016  network-structure  academia  objective-measure  let-me-see  nibble  reference 
july 2016 by nhaliday
