metadata   32797


ClaimReview schema
https://www.poynter.org/news/website-helps-you-find-related-fact-checks-and-it-was-built-17-year-old

Instead of pool days and part-time jobs, Sreya Guha spends her summers with lines and lines of code.

A senior at Castilleja, a high school in Palo Alto, California, Guha has spent the past two summers creating software. Her most recent project, Related Fact Checks, lets internet users paste article links and search to see whether that topic has already been debunked by a fact-checking organization.

The platform isn’t your typical class project — it’s one of the best uses of existing technology to combat online misinformation, several fact-checking experts told Poynter.
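The "ClaimReview schema" in the title is the schema.org type that fact-checking organizations embed in their pages, which is what lets a tool like this match an article to fact checks that already exist. A rough sketch of that markup, built here as a Python dict; every value (URLs, names, rating) is invented for illustration, not taken from the article:

# Minimal, illustrative ClaimReview markup; all values below are made up.
import json

claim_review = {
    "@context": "https://schema.org",
    "@type": "ClaimReview",
    "url": "https://factchecker.example/checks/12345",
    "datePublished": "2017-12-01",
    "author": {"@type": "Organization", "name": "Example Fact Checker"},
    "claimReviewed": "Example claim circulating online.",
    "itemReviewed": {
        "@type": "Claim",
        "firstAppearance": {"@type": "CreativeWork", "url": "https://news.example/article"},
    },
    "reviewRating": {
        "@type": "Rating",
        "ratingValue": 1,
        "bestRating": 5,
        "worstRating": 1,
        "alternateName": "False",
    },
}

# Serialize to the JSON-LD that would sit in a <script type="application/ld+json"> tag.
print(json.dumps(claim_review, indent=2))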
metadata  scheme  compciv 
4 days ago by danwin
for real? they're gonna get him on "track changes"?
Mueller court filing includes Microsoft Word documents showing edits said to have been made by Manafort --->
https://twitter.com/davidjoachim/status/939261123911716872
metadata  compciv 
4 days ago by danwin
best way to build a meta query using sql in wordpress
Combine the rows using a self-join, and you're good to go:

-- Join the meta table to itself once per meta_key ("name", "season", "episode"),
-- matching the rows on post_id so each result row carries all three values.
SELECT *
FROM yourtable name
INNER JOIN yourtable season
  ON season.post_id = name.post_id
  AND season.meta_key = 'season'
INNER JOIN yourtable episode
  ON episode.post_id = name.post_id
  AND episode.meta_key = 'episode'
WHERE name.meta_key = 'name'
  AND name.meta_value = 'Smallville'
-- meta_value is stored as text, so this sort is lexical; use
-- CAST(season.meta_value AS UNSIGNED) if you need numeric ordering.
ORDER BY season.meta_value, episode.meta_value
wordpress  metadata  post_meta  query  sql  good 
4 days ago by texorama
Don't reinvent the wheel! Use metadata systems that work! – Customer Feedback for TagSpaces
Your application has great potential. But...

Do not use the file name to store metadata! File names should not contain any user-generated information. Storing metadata in file names has numerous problems: security, privacy, and non-portability, just to name a few.

If you are going to support every kind of file, then you should use what is already there. Audio, video, image and document files already contain metadata. You have to read that metadata and import it to your application. Don't force the user to reenter all of it.

Besides format-specific metadata, there is also multi-format metadata: XMP. It is even an international standard.
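To make the "read what is already there" point concrete, here is a rough sketch of pulling a file's embedded metadata (EXIF/IPTC/XMP) from Python by shelling out to exiftool; it assumes the exiftool CLI is installed, and the file name is made up:

# Read the metadata a file already carries instead of encoding it in the file name.
import json
import subprocess

def read_embedded_metadata(path):
    # `exiftool -j` prints a JSON array with one object per input file.
    out = subprocess.run(["exiftool", "-j", path],
                         capture_output=True, text=True, check=True).stdout
    return json.loads(out)[0]

meta = read_embedded_metadata("holiday-photo.jpg")   # hypothetical file
print(meta.get("Title"), meta.get("Subject"))        # common XMP/Dublin Core fields, if present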
TagSpaces  selfhosted  tags  metadata  XMP 
6 days ago by coffeebucket
BEACON link dump format
BEACON is a data interchange format for large numbers of uniform links. A BEACON link dump consists of

a set of links (Section 2.1) and
a set of meta fields (Section 4).
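To get a feel for the shape of such a dump, here is a toy Python parser; it assumes the spec's usual conventions of leading "#FIELD: value" meta lines followed by pipe-separated link lines, and glosses over the finer rules (consult the spec for the exact grammar):

# Toy parser for a BEACON-style link dump: meta fields up top, link lines below.
def parse_beacon(text):
    meta, links = {}, []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        if line.startswith("#"):
            field, _, value = line[1:].partition(":")
            meta[field.strip()] = value.strip()
        else:
            # A link line has a source token plus optional annotation/target tokens.
            parts = (line.split("|") + ["", ""])[:3]
            links.append(tuple(p.strip() for p in parts))
    return meta, links

example = """#FORMAT: BEACON
#PREFIX: http://example.org/ids/
#TARGET: http://example.com/about/

12345|Some label|http://example.com/about/12345
67890
"""
meta, links = parse_beacon(example)
print(meta["TARGET"], links[0])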
linking  metadata  standard  rdf  linkeddata 
10 days ago by rybesh
Twitter
Jon Dunn presenting now at #AMIA17 on a new project, An Audiovisual Platform to Support Mas…
Metadata  AMIA17  from twitter_favs
11 days ago by verwinv
File Information Tool Set (FITS)
FITS identifies, validates and extracts technical metadata for a wide range of file formats.
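A rough sketch of driving it from Python, assuming FITS is installed and its fits.sh launcher is on the PATH (fits.bat on Windows); the file names are made up:

import subprocess

# FITS combines the output of its bundled tools into a single XML report:
# -i names the input file, -o the report to write.
subprocess.run(["fits.sh", "-i", "scan0001.tiff", "-o", "scan0001.fits.xml"], check=True)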
DigitalPreservation  Metadata  Tools 
15 days ago by gmcmahon
Algos know more about us than we do about ourselves
NOVEMBER 24, 2017 | Financial Times | John Dizard.

When intelligence collectors and analysts take an interest in you, they usually start not by monitoring the content of your calls or messages, but by looking at the patterns of your communications. Who are you calling, how often and in what sequence? What topics do you comment on in social media?

This is called traffic analysis, and it can give a pretty good notion of what you and the people you know are thinking and what you are preparing to do. Traffic analysis started as a military intelligence methodology, and became systematic around the first world war. Without even knowing the content of encrypted messages, traffic analysts could map out an enemy “order of battle” or disposition of forces, and make inferences about commanders’ intentions.

Traffic analysis techniques can also cut through the petabytes of redundant babble and chatter in the financial and political worlds. Even with state secrecy and the forests of non-disclosure agreements around “proprietary” investment or trading algorithms, crowds can be remarkably revealing in their open-source posts on social media.

Predata, a three-year-old New York and Washington-based predictive data analytics provider, has a Princeton-intensive crew of engineers and international affairs graduates working on early “signals” of market and political events. Predata trawls the open metadata for users of Twitter, Wikipedia, YouTube, Reddit and other social media, and analyses it to find indicators of future price moves or official actions.

I have been following their signals for a while and find them to be useful indicators. Predata started by creating political risk indicators, such as Iran-Saudi antagonism, Italian or Chilean labour unrest, or the relative enthusiasm for French political parties. Since the beginning of this year, they have been developing signals for financial and commodities markets.

The 1-9-90 rule
1 per cent of internet users initiate discussions or content, 9 per cent transmit content or participate occasionally and 90 per cent are consumers or ‘lurkers’

Take the example of the company’s BoJ signal. For this, Predata collects the metadata from 300 sources, such as Twitter users, contested Wikipedia edits or YouTube items created by Japanese monetary policy geeks. Of those, at any time perhaps 100 are important, and 8 to 10 turn out to be predictive... This is where you need some domain knowledge. It turns out that Twitter is pretty important for monetary policy, along with the Japanese-language Wikipedia page for the Bank of Japan, or, say, a YouTube video of [BoJ governor] Haruhiko Kuroda’s cross-examination before a Diet parliamentary committee.

“Then you build a network of candidate discussions and look for the pattern those took before historical moves. The machine-learning algorithm goes back and picks the leads and lags between traffic and monetary policy events.”

Typically, Predata’s algos seem to be able to signal changes in policy or big price moves somewhere between 2 days and 2 weeks in advance. Unlike some academic Twitter scholars, Predata does not do systematic sentiment analysis of tweets or Wikipedia edits. “We only look for how many people there are in the conversation and comments, and how many people disagreed with each other. We call the latter the coefficient of contestation,” Mr Shinn says.

The lead time for Twitter, Wiki or other social media signals varies from one market to another. Foreign exchange markets typically move within days, bond yields within a few days to a week, and commodities prices within a week to two weeks. “If nothing happens within 30 days,” says Mr Lee, “then we say we are wrong.”
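The lead/lag idea in the excerpt can be illustrated with a toy calculation (this is not Predata’s model, and all the data below is invented): slide a daily "conversation volume" series against a daily event indicator and keep the offset where they line up best.

# Toy lead-time search: correlate traffic with events shifted forward by 1..14 days
# and report the lead with the strongest correlation. Everything here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
days = 120
events = (rng.random(days) < 0.05).astype(float)      # sparse "policy event" days

lead_true = 5                                          # chatter picks up ~5 days ahead
traffic = rng.normal(0, 0.3, days)
traffic[:-lead_true] += 2.0 * events[lead_true:]

def best_lead(traffic, events, max_lead=14):
    scores = {}
    for lead in range(1, max_lead + 1):
        # Compare traffic on day t with events on day t + lead.
        scores[lead] = np.corrcoef(traffic[:-lead], events[lead:])[0, 1]
    return max(scores, key=scores.get), scores

lead, scores = best_lead(traffic, events)
print(f"strongest signal about {lead} days ahead (r={scores[lead]:.2f})")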
signals  algorithms  massive_data_sets  Predata  metadata  political_risk  financial_markets  commodities 
15 days ago by jerryking


