scaling   16263

Novartis’s new chief sets sights on ‘productivity revolution’
SEPTEMBER 25, 2017 | Financial Times | Sarah Neville and Ralph Atkins.

The incoming chief executive of Novartis, Vas Narasimhan, has vowed to slash drug development costs, eyeing savings of up to 25 per cent on multibillion-dollar clinical trials as part of a “productivity revolution” at the Swiss drugmaker.

The time and cost of taking a medicine from discovery to market has long been seen as the biggest drag on the pharmaceutical industry’s performance, with the process typically taking up to 14 years and costing at least $2.5bn.

In his first interview as CEO-designate, Dr Narasimhan says analysts have estimated between 10 and 25 per cent could be cut from the cost of trials if digital technology were used to carry them out more efficiently. The company has 200 drug development projects under way and is running 500 trials, so “that will have a big effect if we can do it at scale”. ... Dr Narasimhan plans to partner with, or acquire, artificial intelligence and data analytics companies, to supplement Novartis’s strong but “scattered” data science capability. ... “I really think of our future as a medicines and data science company, centred on innovation and access.”

He must now decide where Novartis has the capability “to really create unique value . . . and where is the adjacency too far?” ... Does he need the cash pile that would be generated by selling off these parts of the business to realise his big data vision? He says: “Right now, on data science, I feel like it’s much more about building a culture and a talent base . . .” ... Novartis has “a huge database of prior clinical trials and we know exactly where we have been successful in terms of centres around the world recruiting certain types of patients, and we’re able to now use advanced analytics to help us better predict where to go . . . to find specific types of patients.”

“We’re finding that we’re able to significantly reduce the amount of time that it takes to execute a clinical trial and that’s huge . . . You could take huge cost out.” ... Dr Narasimhan cites one inspiration as a visit to Disney World with his young children where he saw how efficiently people were moved around the park, constantly monitored by “an army of [Massachusetts Institute of Technology-]trained data scientists”.
He has now harnessed similar technology to overhaul the way Novartis conducts its global drug trials. His clinical operations teams no longer rely on Excel spreadsheets and PowerPoint slides, but instead “bring up a screen that has a predictive algorithm that in real time is recalculating what is the likelihood our trials enrol, what is the quality of our clinical trials”.

“For our industry I think this is pretty far ahead,” he adds.

More broadly, he is realistic about the likely attrition rate. “We will fail at many of these experiments, but if we hit on a couple of big ones that are transformative, I think you can see a step change in productivity.”

Novartis  pharmaceutical_industry  CEOs  productivity  scaling  product_development  data_scientists  artificial_intelligence  analytics  data_driven  attrition_rates  failure  Indian-Americans  predictive_analytics  spreadsheets 
yesterday by jerryking
Solving issues Scaling Remote Desktop on High DPI screens (Surface Pro) | Cameron Dwyer | Office 365, SharePoint, Outlook, OnePlace Solutions
The high density (DPI or Dots Per Inch) of modern screens such as a Surface Pro can cause numerous issues when trying to use Remote Desktop Connection (RDC) to remotely connect to another machine. Add an external monitor to the mix and you’ll be pulling your hair out before long. These are the…
mstsc  rdc  microsoft  scaling  dpi  hdpi  remote_desktop 
2 days ago by vonc
Scaling the GitLab database | GitLab
pgpool was the first solution we looked into, mostly because it seemed quite attractive based on all the features it offered. Some of the data from our tests can be found in this comment.

Ultimately we decided against using pgpool based on a number of factors. For example, pgpool does not support sticky connections. This is problematic when performing a write and (trying to) display the results right away. Imagine creating an issue and being redirected to the page, only to run into an HTTP 404 error because the server used for any read-only queries did not yet have the data. One way to work around this would be to use synchronous replication, but this brings many other problems to the table; problems we prefer to avoid.

Another problem is that pgpool's load balancing logic is decoupled from your application and operates by parsing SQL queries and sending them to the right server. Because this happens outside of your application you have very little control over which query runs where. This may actually be beneficial to some because you don't need additional application logic, but it also prevents you from adjusting the routing logic if necessary.

Configuring pgpool also proved quite difficult due to the sheer number of configuration options. Perhaps the final nail in the coffin was the feedback we got on pgpool from those who had used it in the past. The feedback we received regarding pgpool was usually negative, though not very detailed in most cases. While most of the complaints appeared to be related to earlier versions of pgpool, it still made us doubt whether using it was the right choice.

The feedback combined with the issues described above ultimately led to us deciding against using pgpool and using pgbouncer instead. We performed a similar set of tests with pgbouncer and were very satisfied with it. It's fairly easy to configure (and doesn't have that much that needs configuring in the first place), relatively easy to ship, focuses only on connection pooling (and does it really well), and had very little (if any) noticeable overhead. Perhaps my only complaint would be that the pgbouncer website can be a little bit hard to navigate.

Using pgbouncer we were able to drop the number of active PostgreSQL connections from a few hundred to only 10-20 by using transaction pooling. We opted for transaction pooling since Rails database connections are persistent.
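For concreteness, here is a minimal pgbouncer.ini sketch of the kind of setup described above; the database name, credentials path and pool sizes are illustrative placeholders, not GitLab's actual configuration.

[databases]
; map a logical database name to the PostgreSQL primary (values are placeholders)
gitlabhq_production = host=127.0.0.1 port=5432 dbname=gitlabhq_production

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
; transaction pooling: a server connection is held only for the duration of a transaction,
; which is what lets a few hundred client connections share 10-20 server connections
pool_mode = transaction
default_pool_size = 20
max_client_conn = 500

The application then connects to port 6432 instead of PostgreSQL directly. The trade-off of transaction pooling is that session-level state (prepared statements, advisory locks, session-level SET commands) does not survive across transactions, which is why it suits persistent, short-transaction workloads like the Rails connections mentioned above.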
pgpool  database  performance  postgres  architecture  gitlab  scaling  via:hellsten  postgreSQL 
4 days ago by s7udi0
Scaling Postgres with Read Replicas and Using WAL to Counter Stale Reads | Hacker News
Instead of having an "observer" process that updates a table with the LSN, you can just ask Postgres directly: the `pg_stat_replication` view has the last LSN replayed on each replica available: https://www.postgresql.org/docs/10/static/monitoring-stats.h...
Also, instead of updating the `users` table with the LSN of the commit - which creates extra write load - why not store it in the session cookie? Then you can route based on that.
Another option is to enable synchronous replication for transactions that need to be visible on all replicas: https://www.postgresql.org/docs/10/static/warm-standby.html#...
Since this can be enabled/disabled for each transaction it's really powerful.
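As a rough sketch of both suggestions, assuming PostgreSQL 10+ function and column names (run the first query on the primary):

-- Last WAL position replayed by each connected replica, and how far behind it is, in bytes.
SELECT application_name,
       replay_lsn,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes
FROM pg_stat_replication;

-- Per-transaction synchronous replication: the commit waits until the standbys
-- listed in synchronous_standby_names have applied the change.
BEGIN;
SET LOCAL synchronous_commit = remote_apply;
-- ... the write that must be immediately visible on those replicas ...
COMMIT;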
hn  postgresql  postgres  scaling  high-availability 
6 days ago by hellsten
Monitoring approach for Streaming Replication with Hot Standby in PostgreSQL 9.3. | EnterpriseDB
Calculating lag in seconds. The following is the SQL that most people use to find the lag in seconds:

SELECT CASE WHEN pg_last_xlog_receive_location() = pg_last_xlog_replay_location()
THEN 0
ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())
END AS log_delay;
Including the above in your repertoire can give you good monitoring for PostgreSQL.
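Note that these function names are the pre-10 ones the post (written for 9.3) uses; from PostgreSQL 10 onwards the xlog functions were renamed (the timestamp function kept its name), so the equivalent check would be roughly:

SELECT CASE WHEN pg_last_wal_receive_lsn() = pg_last_wal_replay_lsn()
THEN 0
ELSE EXTRACT (EPOCH FROM now() - pg_last_xact_replay_timestamp())
END AS log_delay;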

In a future post I will include a script that can be used for monitoring a Hot Standby with PostgreSQL streaming replication.
posgres  postgresql  scaling  architecture  high-availability 
6 days ago by hellsten
Scaling Postgres with Read Replicas & Using WAL to Counter Stale Reads — Brandur Leach
- Modern databases operating over low latency connections can keep replicas trailing their primary very closely, and probably spend most of their time less than a second out of date. Even systems using read replicas without any techniques for mitigating stale reads will produce correct results most of the time.

- Let’s take a look at a technique to make sure that stale reads never occur. We’ll use Postgres’s own understanding of its replication state and some in-application intelligence around connection management to accomplish it.

1. Postgres commits all changes to a WAL (write-ahead log) for durability reasons.
2. Changes are written to the WAL one entry at a time and each one is assigned a LSN (log sequence number).
3. Changes are batched in 16 MB WAL segments.
4. A Postgres database can dump a representation of its current state to a base backup which can be used to initialize a replica.
5. From there, the replica stays in lockstep with its primary by consuming changes in its emitted WAL.
6. A base backup comes with a pointer to the current LSN so that when a replica starts to consume the WAL, it knows where to start.

- There are a few ways for a replica to consume WAL.

1. The first is “log shipping”: completed WAL segments (16 MB chunks of the WAL) are copied from primary to replicas and consumed as a single batch. => secondaries will be at least as far behind as the current segment that is still being written.

2. Another common configuration for consuming WAL is “streaming”, where WAL is emitted by the primary to replicas over an open connection. This has the advantage of secondaries being very current at the cost of some extra resource consumption.


- replicas consuming WAL with log shipping are also known as “warm standbys”
- while those using streaming are called “hot standbys”.

- By routing read operations only to replicas that are caught up enough to run them accurately, we can eliminate stale reads. This necessitates an easy way of measuring how far behind a replica is, and the WAL’s LSN is perfect for this use.

- For any action that will later affect reads, we touch the user’s min_lsn by setting it to the value of the primary’s pg_current_wal_lsn().
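A minimal SQL sketch of that bookkeeping, assuming PostgreSQL 10+ function names and an illustrative pg_lsn column named min_lsn on the users table (:user_id and :min_lsn are bind-parameter placeholders):

-- On the primary, after a write that later reads depend on,
-- record the current WAL position against the user:
UPDATE users SET min_lsn = pg_current_wal_lsn() WHERE id = :user_id;

-- On a candidate replica, before routing one of that user's reads to it,
-- check that it has replayed at least up to the stored position:
SELECT pg_wal_lsn_diff(pg_last_wal_replay_lsn(), :min_lsn) >= 0 AS caught_up;

If no replica is caught up, the read can simply fall back to the primary, which always has the latest data.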
postgres  postgresql  scaling  high-availability  architecture  standby 
6 days ago by hellsten

related tags

advice  agile  airbnb  alan_kay  algolia  algorithm  amazon  amdahls-law  analogy  analytics  apache-arrow  app  architecture  art  article  artificial_intelligence  attrition_rates  automation  autoscale  avoid  aws  begin  benediktevans  bioinformatics  bitcoin  blockchain  blog  blogpost  bob+sutton  book  books  building_a_business  business  cache  caching  cap  career  ceos  certificate  change  cities  civictech  cloud  cluster  coding  collaboration  communication  complex  computing  content-generation  content  contention  control  coworkers  css  culture  customers  customerservice  dask  data  data_driven  data_scientists  dataarchitecture  database  databases  dataframe  db  delegation  design  devops  discussion  django  docker  dpi  ec2  ecosystem  edgecase  elb  emotion  engineering  entrepreneurship  envoy  ethereum  example  examples  exec  failure  family  feel  flat  flow  founders  gitlab  group  grow  growth  guideline  hdpi  heroku  hhrr  hickory  high-availability  hiring  history  hn  how  hr  http  https  humanize  ifttt  image-processing  image  images  implementation  indian-americans  infrastructure  innovation  instance  interesting  interop  j-pal  java  job  jobs  jpg  jvm  karthik  kills  knightfoundation  knowhow  kubernetes  large  largeprojects  latency  law  leadership  learning  lessons  letsencrypt  level  linkedin  littles-law  loadbalancing  locking  locks  machine  management  manual  marketing  mastodon  mature  maturity  medinformatics  microservices  microsoft  modelling  mstsc  navy  networking  nginx  node.js  nosql  novartis  observability  operations  ops  organization  ou2.0  out-of-core  paas  pandas  partition  performance  pgpool  pharmaceutical_industry  phone  pipeline  pirates  pocket  posgres  postgres  postgresql  predictive_analytics  premature  presto  price  product_development  productivity  productmanagement  programming  proxy  python  rails  rather-interesting  rct  rdc  react.js  react  reddit  redux  reid+hoffman  remote  remote_desktop  replication  research  resilience  resolution  responsive  ruby  rwd  saas  sales  scalability  scale  scalingup  schedulers  script  security  segwit  sharding  share  sharing  shiny  site  sites  sklearn  social  socialscience  software-development-is-not-programming  spark  spot-fleet  spot-instances  spot  spreadsheets  sql  ssl  ssr  standby  startup  startups  storage  strategy  svg  system-dynamics  systems  talent  talks  team  teams  tenant  theory  threads  tips  tls  to-write-about  to  tool  trust  twitter  uber  user-experience  usl  webdev  websocket  woocommerce  wordpress  work  working   
