The Best Scientific Images Of 2013

It is a slow Saturday with virtually no financial, economic or any other news, so what better way to spend it than looking at the coolest non-finance related images of the past year. Without further ado, here they are, courtesy of Wired: the best scientific visualizations of 2013.

* * *

1. The Mathematics of Familiar Strangers

We live in an image-dominated age, and popular science abounds with visuals: eye-popping photographs, gorgeous graphics and slick information design. Amidst all this eye candy, not much attention is paid to figures accompanying articles in scientific journals and white papers.

Even if they’re utilitarian and low-resolution, though — or perhaps because of that — these figures are a sort of scientific folk art. They convey complex findings or principles with simplicity and grace, and sometimes even beauty.

On the following pages are Wired Science’s favorite research graphics of 2013. They’re in no particular order, except that the first are particular favorites. Based on a population-wide analysis of bus ridership in Singapore, they depict a little-appreciated type of social network: that of “familiar strangers,” or the people we encounter while going about our everyday routines.

Above is the encounter network of a single bus and its 214 regular passengers. Below and at left is a single individual’s “encounter network” over the course of a week; to the right are the formal chances of bumping into a familiar stranger at a given time. Even at a glance, the figures quantify a truth intuited by commuters: beneath urban life’s chaotic, seemingly random surface lies pattern and order.
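
For readers curious what goes into such a figure, here is a toy sketch (emphatically not the authors’ pipeline; the trip records are invented) of how an encounter network can be built from smart-card-style ridership data with networkx: two passengers are linked whenever they share a bus trip, and edge weights count repeated encounters.

```python
from collections import defaultdict
from itertools import combinations

import networkx as nx

# hypothetical (invented) trip records: (passenger_id, bus_trip_id)
records = [
    ("p1", "bus7_mon_0800"), ("p2", "bus7_mon_0800"), ("p3", "bus7_mon_0800"),
    ("p1", "bus7_tue_0800"), ("p2", "bus7_tue_0800"),
    ("p3", "bus12_mon_0900"), ("p4", "bus12_mon_0900"),
]

# group passengers by the trip they shared
riders_per_trip = defaultdict(set)
for passenger, trip in records:
    riders_per_trip[trip].add(passenger)

# link every pair of co-riders; edge weight = number of shared trips
G = nx.Graph()
for riders in riders_per_trip.values():
    for a, b in combinations(sorted(riders), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

# p1 and p2 are "familiar strangers": they met on two separate trips
print(G.edges(data=True))
```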

Citation: “Understanding metropolitan patterns of daily encounters.” By Lijun Sun, Kay W. Axhausen, Der-Horng Lee, Xianfeng Huang. Proceedings of the National Academy of Sciences, Vol. 110 No. 34, August 20, 2013.



2. An Unexpected Engine of Evolution

It’s often thought that evolution is fueled by competition, with red-in-tooth-and-claw dynamics generating new, better-adapted forms and species. But sometimes — perhaps frequently — new species just happen.

Above and at right is a map of greenish warbler distribution, color-coded according to local genetic signatures, around the Tibetan plateau. The warblers are what’s known as a ring species, occupying a horseshoe-shaped range; as neighboring populations intermingle, genes flow around the horseshoe, but populations at its tips no longer interbreed and eventually become different species.

At left is a computational model of this process. According to the model, no adaptations or differences in reproductive fitness are necessary to produce new species. Rather, they seem to arise as a function of time and space; evolution itself is a generative, diversifying force.
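
As a heavily simplified illustration of that idea (purely a toy; this is not the Martins, de Aguiar and Bar-Yam model), the sketch below lets demes drift neutrally while exchanging genes only with their neighbours along the horseshoe. With no selection at all, adjacent demes stay genetically similar while the two ends drift apart.

```python
import numpy as np

rng = np.random.default_rng(0)
n_demes, n_loci, generations = 30, 200, 2000
mig = 0.05   # strength of gene flow between neighbouring demes
mut = 0.01   # per-generation drift step at each locus

# each deme's "genome" is a vector of continuous allele-frequency-like values
genomes = np.zeros((n_demes, n_loci))

for _ in range(generations):
    # neutral drift: every deme wanders independently, no fitness differences
    genomes += rng.normal(0.0, mut, genomes.shape)
    # local gene flow: each deme is pulled toward its neighbours on the horseshoe
    mixed = genomes.copy()
    mixed[1:] += mig * (genomes[:-1] - genomes[1:])
    mixed[:-1] += mig * (genomes[1:] - genomes[:-1])
    genomes = mixed

def genetic_distance(i, j):
    return float(np.linalg.norm(genomes[i] - genomes[j]) / np.sqrt(n_loci))

# neighbours remain similar; the ends of the horseshoe diverge
print("adjacent demes :", round(genetic_distance(0, 1), 3))
print("opposite ends  :", round(genetic_distance(0, n_demes - 1), 3))
```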

Citation: “Evolution and stability of ring species.” By Ayana B. Martins, Marcus A. M. de Aguiar and Yaneer Bar-Yam. Proceedings of the National Academy of Sciences, March 11, 2013.



3. A Fossil Insect’s Forest Tale

At first glance, this computer re-creation of a 110-million-year-old fossil lacewing larva might seem like eye candy. But what makes it special is the information it provides — not just about the insect’s anatomy and the evolutionary history of its family, but the Early Cretaceous forests in which it lived. In modern lacewings, those frond-like shell structures catch small, fine hairs that grow on the surface of ferns, creating a fern-like camouflage coat. The fossil lacewing, researchers surmise, lived in forests burned regularly by wildfires, opening habitat in which ferns could grow.

Citation: “Early evolution and ecology of camouflage in insects.” By Ricardo Pérez-de la Fuente, Xavier Delclòs, Enrique Peñalver, Mariela Speranza, Jacek Wierzchos, Carmen Ascaso, and Michael S. Engel. Proceedings of the National Academy of Sciences, Vol. 109 No. 52, December 26, 2012.



4. Alan Turing’s Fingers

Nearly six decades after Alan Turing’s death, the British mathematician is still celebrated as a Nazi code-breaking World War II hero and father of modern computer science. His most enduring legacy, though, may be in biology: Late in his life, Turing theorized that a particular type of chemical interaction could account for many patterns observed in nature. In subsequent decades, scientists would find these Turing patterns in everything from cheetah spots to organ formation. In the image above, Turing patterns can be seen in the development of mouse fingers, just as they’re seen in fish fin development — suggesting, say researchers, that some Turing-type mechanism is an ancestral feature of vertebrate evolution.
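
To make the mechanism concrete, here is a minimal reaction-diffusion sketch of the general activator-inhibitor kind Turing proposed. It uses the classic Gray-Scott system with standard demo parameters, not the Hox/digit model from the cited paper; run long enough, the grid self-organizes into spots.

```python
import numpy as np

n, steps = 128, 5000
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065   # classic spot-forming parameters
U = np.ones((n, n))
V = np.zeros((n, n))
# seed a small patch of the second chemical in the middle of the grid
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25

def laplacian(Z):
    # 5-point stencil with periodic boundaries
    return (np.roll(Z, 1, 0) + np.roll(Z, -1, 0) +
            np.roll(Z, 1, 1) + np.roll(Z, -1, 1) - 4 * Z)

for _ in range(steps):
    uvv = U * V * V
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# V now holds a Turing-style spot pattern; inspect with e.g. plt.imshow(V)
print(float(V.min()), float(V.max()))
```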

Citation: “Hox Genes Regulate Digit Patterning by Controlling the Wavelength of a Turing-Type Mechanism.” By Rushikesh Sheth, Luciano Marcon, M. Félix Bastida, Marisa Junco, Laura Quintana, Randall Dahn, Marie Kmita, James Sharpe, Maria A. Ros. Science, Vol. 338 No. 6113, 14 December 2012.

 


5. The Sleep-Deprived Genome 

If you miss a night’s sleep, you feel like a zombie — a phenomenon described at the genomic level in this comparison of gene expression in well-rested and sleep-deprived people. The two groups differ not only in genes linked to sleep and circadian rhythms, but also in genes tied to immune function, cell repair and stress response.

 


6. Mental CLARITY

A new technique for dissolving fatty molecules in biological tissue can be used to render organs transparent (below). Known, appropriately, as CLARITY, the technique’s power becomes evident when combined with fluorescent tags that affix to particular cell types. The result: translucent, color-coded brains, such as the mouse brain above, that could give researchers a literal window into neurological function and anatomy.

Citation: “Structural and molecular interrogation of intact biological systems.” By Kwanghun Chung, Jenelle Wallace, Sung-Yon Kim, Sandhiya Kalyanasundaram, Aaron S. Andalman, Thomas J. Davidson, Julie J. Mirzabekov, Kelly A. Zalocusky, Joanna Mattis, Aleksandra K. Denisin, Sally Pak, Hannah Bernstein, Charu Ramakrishnan, Logan Grosenick, Viviana Gradinaru & Karl Deisseroth. Nature, online publication 10 April 2013.

 


7. How Much Is a Forest Worth?

Jungle cleared late in the 19th century to build the Panama Canal grew back quickly; by 2000, when the United States gave control of the canal to Panama, the forests had largely recovered. Soon, however, they were threatened by commercial and residential development. This is problematic for many reasons: not only is the juncture of North and South America a biodiversity hotspot, but canal operations rely on dry-season water flows impacted by changes in forest cover.

Of course, when weighed against short-term profit, such well-meaning but fuzzy-sounding environmental arguments often lose. Enter the concept of ecosystem services, which quantifies nature’s bottom-line financial worth to humans. For the map above, researchers calculated the annual value of sustainably managed Panamanian forests. They’re worth far more as water-gathering, carbon-sequestering timber than as parking lots.

Citation: “Bundling ecosystem services in the Panama Canal watershed.” By Silvio Simonit and Charles Perrings. Proceedings of the National Academy of Sciences, Vol. 110 No. 23, 4 June 2013.

 


8. Parasitic Complexity

For decades, parasites were viewed primarily as pests: something to ignore, perhaps with a sniff of disgust, unless they harmed humans, in which case they were enemies. In recent years, though, scientists have come to appreciate the nuanced, often important roles played by parasites in animal life.

Much of that appreciation involves the relationship between parasites and immune system function, but there’s an ecological angle, too. Witness this computer-modeled food web: When parasites are included in its parameters, it’s revealed as a far more complex system than it appeared without them.

Citation: “Parasites Affect Food Web Structure Primarily through Increased Diversity and Complexity.” By Jennifer A. Dunne, Kevin D. Lafferty, Andrew P. Dobson, Ryan F. Hechinger, Armand M. Kuris, Neo D. Martinez, John P. McLaughlin, Kim N. Mouritsen, Robert Poulin, Karsten Reise, Daniel B. Stouffer, David W. Thieltges, Richard J. Williams, Claus Dieter Zander. PLoS Biology, Vol. 11 No. 6, 11 June 2013

 


9. A Genome Is Not a Book

Until very recently, genomes were treated as linear strings of genetic information — something that could be read sequentially, DNA molecule by DNA molecule, like lines in a book. Inside our cells, though, our chromosomes are tangled in fabulously complex ways, and the shape of these tangles may be inseparable from their function.

New methods are now being developed to study real-time, real-shape genomes. Above is one such analysis: in a series of cell-nucleus snapshots, it captures gene activity across time and space. Activity proved to be coordinated in far-flung regions of the genome, but in ways that fluctuated over time. Structure itself is a form of information.

Citation: “Micron-scale coherence in interphase chromatin dynamics.” By Alexandra Zidovska, David A. Weitz, and Timothy J. Mitchison. Proceedings of the National Academy of Sciences, online publication 9 September 2013.

 


10. A Lost Underground Kingdom

Soil isn’t just dirt. It’s a rich microbial ecosystem, integral to the life that grows above. In the Great Plains, these ecosystems have been almost entirely wiped out: as tallgrass prairies were converted to farmland, soil composition changed, too. The microbial relationships that sustained one of Earth’s great biomes were lost to time. Yet a few prairie fragments remain; by taking DNA samples from their soils, researchers reconstructed this vanished underground world.

Citation: “Reconstructing the Microbial Diversity and Function of Pre-Agricultural Tallgrass Prairie Soils in the United States.” By Noah Fierer, Joshua Ladau, Jose C. Clemente, Jonathan W. Leff, Sarah M. Owens, Katherine S. Pollard, Rob Knight, Jack A. Gilbert, Rebecca L. McCulley. Science, Vol. 342 No. 6158, 1 November 2013.

 


11. Lunar Cycles, Life Cycles

In the North American Arctic, populations of snowshoe hares, autumnal moths and Canada lynx rise and fall in 9.3-year cycles, moving in uncanny tandem with the time it takes for our moon’s orbit to cross the sun’s visual path. This might not be a coincidence. Solar and lunar cycles modulate Earth’s exposure to cosmic rays, which are known to damage plant DNA; this could result in plants concentrating resources on cell repair, thus producing fewer of the indigestible compounds that typically serve as a defense against predation.

Every 9.3 years, then, when the sun and moon are positioned just so, Arctic plants are at their most vulnerable; population booms among plant-hungry moths and hares soon follow, and are followed in turn by booms in hare-munching lynx. This synchronization of the celestial and the ecological is still just a hypothesis, but it’s a lovely one.

Citation: “Linking ‘10-year’ herbivore cycles to the lunisolar oscillation: the cosmic ray hypothesis.” By Vidar Selås. Oikos, published online 12 September 2013.


    



via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/mPiUKuWhfLM/story01.htm Tyler Durden

Government Spent $224,863 On “Custom-Fit” Condoms

Money well-spent, we are sure some would suggest; but when the National Institutes of Health spends $224,863 to test 95 “custom-fitted” condoms so every hard-working American man can choose the one that fits ‘just right’, we suggest the government is stretching the tax dollar a little too far. As NY Post reports, the study was prompted by concern that despite the wide-scale promotion of latex condoms to help prevent the spread of HIV, their use remains “disappointingly low,” because, the government says, one-third to one-half of men complain of poor-fitting prophylactics and are less likely to use them… apparently. Of course, we assume, when questioned, all said the condom was ‘too small’.

 

Via NY Post,

 

The NIH blames US “regulatory guidelines” for American men having to choose from a “narrow range of condom sizes.”

 

The six-figure grant was awarded to TheyFit of Covington, Ga., which offers a wide variety of condoms that vary in length — from a bit more than 3 inches to nearly 9½ — and in width.

 

They’re available in European Union countries, but not in the United States, where they would have to be approved by the Food and Drug Administration.

 

“For most of their existence, condoms were custom fitted,” TheyFit explains on its Web site.

 

“For hundreds of years, until the early part of the 20th century, they were made of linen or animal gut fitted to individual penis sizes.”

 

But the introduction of latex, mass production of condoms and other factors created what the firm calls “the ‘one size fits all’ condom.”

 

For the man who doesn’t know his own penis size, TheyFit offers a free downloadable “FitKit.”

 

 

In 2009, the NIH financed a $423,500 study to find out why condom usage is so low in the United States.

 

Brings a whole new meaning to Obama’s new “Promise Zones”…

 

But for those intrigued enough… here is @OnionSlayer’s informative map of the world’s penis size

 

And before you freak out (the scale is in cm not inches)…


    



via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/lSVBkD6thrw/story01.htm Tyler Durden


From Non-GAAP To Non-Sense: David Stockman Slams The “Earnings Ex-Items” Smoke-Screen

We noted on Thursday, when Alcoa reported, that "non-recurring, one-time" charges are anything but; indicating just how freely the company abuses the non-GAAP EPS definition, and how adding back charges has become the ordinary course of business. But it's not just Alcoa, and as David Stockman, author of The Great Deformation, notes, Wall Street’s institutionalized fiddle of GAAP earnings makes P/E multiples appear far lower than they actually are, and thereby helps perpetuate the myth that the market is "cheap."

 

Via David Stockman,

THE “EARNINGS EX-ITEMS” SMOKE SCREEN

One of the reasons that the monetary politburo was unconcerned about the blatant buying of earnings through financial engineering is that it fully subscribed to the gussied-up version of EPS peddled by Wall Street. The latter was known as “operating earnings” or “earnings ex-items,” and it was derived by removing from the GAAP (generally accepted accounting principles)-based financial statements filed with the SEC any and all items which could be characterized as “one-time” or “nonrecurring.”

These adjustments included asset write-downs, goodwill write-offs, and most especially “restructuring” charges to cover the cost of head-count reductions, including severance payments. Needless to say, in an environment in which labor was expensive and debt was cheap, successive waves of corporate downsizings could be undertaken without the inconvenience of a pox on earnings due to severance costs; these charges were “one time” and to be ignored by investors.

Likewise, there was no problem with the high failure rate of M&A deals. In due course, dumb investments could be written off and the resulting losses wouldn’t “count” in earnings ex-items.

In short, Wall Street’s institutionalized fiddle of GAAP earnings made PE multiples appear far lower than they actually were, and thereby helped perpetuate the myth that the market was “cheap” during the second Greenspan stock market bubble. Thus, as the S&P 500 index reached its nosebleed peaks around 1,500 in mid-2007, Wall Street urged investors not to worry because the PE multiple was within its historic range.

In fact, the 500 S&P companies recorded net income ex-items of $730 billion in 2007 relative to an average market cap during the year of $13 trillion. The implied PE multiple of 18X was not over the top, but then it wasn’t on the level, either. The S&P 500 actually reported GAAP net income that year of only $587 billion, a figure that was 20 percent lower owing to the exclusion of $144 billion of charges and expenses that were deemed “nonrecurring.” The actual PE multiple on GAAP net income was 22X, however, and that was expensive by any historic standard, and especially at the very top of the business cycle.

During 2008 came the real proof of the pudding. Corporations took a staggering $304 billion in write-downs for assets which were drastically overvalued and business operations which were hopelessly unprofitable. Accordingly, reported GAAP net income for the S&P 500 plunged to just $132 billion, meaning that during the course of the year the average market cap of $10 trillion represented 77X net income.

To be sure, after the financial crisis cooled off the span narrowed considerably between GAAP legal earnings and the Wall Street “ex-items” rendition of profits, and not surprisingly in light of how much was thrown into the kitchen sink in the fourth quarter of 2008. Even after this alleged house cleaning, however, more than $100 billion of charges and expenses were excluded from Wall Street’s reckoning of the presumptively “clean” S&P earnings reported for both 2009 and 2010.

So, if the four years are taken as a whole, the scam is readily evident. During this period, Wall Street claimed that the S&P 500 posted cumulative net income of $2.42 trillion. In fact, CEOs and CFOs required to sign the Sarbanes-Oxley statements didn’t see it that way. They reported net income of $1.87 trillion. The difference was accounted for by an astounding $550 billion in corporate losses that the nation’s accounting profession insisted were real, and that had been reported because the nation’s securities cops would have sent out the paddy wagons had they not been.

During the four-year round trip from peak-to-bust-to-recovery, the S&P 500 had thus traded at an average market cap of $10.6 trillion, representing nearly twenty-three times the average GAAP earnings reported during that period. Not only was that not “cheap” by any reasonable standard, but it was also indicative of the delusions and deformations that the Fed’s bubble finance had injected into the stock market.

In fact, every dollar of the $550 billion of charges during 2007–2010 that Wall Street chose not to count represented destruction of shareholder value. When companies chronically overpaid for M&A deals, and then four years later wrote off the goodwill, that was an “ex-item” in the Wall Street version of earnings, but still cold corporate cash that had gone down the drain. The same was true with equipment and machinery write-off when plants were shut down or leases written off when stores were closed. Most certainly, there was destruction of value when tens of billions were paid out for severance, health care, and pensions during the waves of headcount reductions.

To be sure, some of these charges represented economically efficient actions under any circumstances; that is, when the Schumpeterian mechanism of creative destruction was at work. The giant disconnect, however, is that these actions and the resulting charges to GAAP income statements were not in the least “one time.” Instead, they were part of the recurring cost of doing business in the hot-house economy of interest rate repression, central bank puts, rampant financial speculation, and mercantilist global trade that arose from the events of August 1971.

The economic cost of business mistakes, restructurings, and balance sheet house cleaning can be readily averaged and smoothed, an appropriate accounting treatment because these costs are real and recurring. Accordingly, the four-year average experience for the 2007–2010 market cycle is illuminating.

The Wall Street “ex-item” number for S&P 500 net income during that period overstated honest accounting profits by an astonishing 30 percent. Stated differently, the time-weighted PE multiple on an ex-items basis was already at an exuberant 17.6X. In truth, however, the market was actually valuing true GAAP earnings at nearly 23X.
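
For anyone who wants to check the arithmetic, the multiples quoted in the preceding paragraphs fall straight out of the figures cited in the text (market caps and earnings below are in trillions of dollars):

```python
# 2007: peak-year multiples
ex_items_2007, gaap_2007, mcap_2007 = 0.730, 0.587, 13.0
print(round(mcap_2007 / ex_items_2007, 1))   # ~17.8 -> the "18X" ex-items multiple
print(round(mcap_2007 / gaap_2007, 1))       # ~22.1 -> the "22X" GAAP multiple

# 2008: crisis-year multiple
gaap_2008, mcap_2008 = 0.132, 10.0
print(round(mcap_2008 / gaap_2008, 1))       # ~75.8 -> the "77X" figure

# 2007-2010 taken as a whole
ex_items_4yr, gaap_4yr, avg_mcap = 2.42, 1.87, 10.6
print(round(ex_items_4yr / gaap_4yr - 1, 2))    # ~0.29 -> the ~30% overstatement
print(round(avg_mcap / (ex_items_4yr / 4), 1))  # ~17.5 -> the "17.6X" ex-items PE
print(round(avg_mcap / (gaap_4yr / 4), 1))      # ~22.7 -> "nearly 23X" on GAAP
```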

This was a truly absurd capitalization rate for the earnings of a basket of giant companies domiciled in a domestic economy where economic growth was grinding to a halt. It was also a wildly excessive valuation for earnings that had been inflated by $5 trillion of business debt growth owing to buybacks, buyouts, and takeovers.


    



via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/INDOc97iAXw/story01.htm Tyler Durden


Overstock’s First Day Of Bitcoin: $130,000 Sales, 840 Transactions, CEO “Stunned”

Submitted by Michael Krieger of Liberty Blitzkrieg blog,

Upon the conclusion of the Senate hearing on Bitcoin this past November, I tweeted that I thought we had entered Phase 2 of the Bitcoin story. A month later, following news that Andreessen Horowitz had led a venture capital investment of $25 million in Coinbase, I wrote:

As I tweeted at the time, I think Bitcoin began phase two of its growth and adoption cycle upon the conclusion of the Senate hearings last month (I suggest reading: My Thoughts on the Bitcoin Hearing).

 

I think phase two will be primarily characterized by two things. More mainstream adoption and ease of use, as well as increasingly large investments by venture capitalists. In the past 24 hours, we have seen evidence of both.

Phase 2 so far is going even more positively than I had expected. Overstock.com accelerated its plans to accept BTC by many months, and the early rollout has been a massive success. The company’s CEO just tweeted:

 

 

This is absolutely huge news and any retail CEO worth their salt will immediately begin to look into Bitcoin adoption.

I hope financial publications that missed the biggest financial story of 2013 continue to mock it with covers of unicorns and waterfalls. It’s the most bullish thing I can imagine.

Furthermore, the purchased items are varied…

The apparent ease of acceptance and use has spurred demand for Bitcoin itself which has pushed back above $1000…


    



via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/wNchFC5m30c/story01.htm Tyler Durden


How Twitter Algos Determine Who Is Market-Moving And Who Isn’t

Now that even Bridgewater has joined the Twitter craze and is using user-generated content for real-time economic modelling, and who knows what else, the scramble to determine who has the most market-moving, and actionable, Twitter stream is on. Because with HFT algos having camped out at all the usual newswire sources: Bloomberg, Reuters, Dow Jones, etc. the scramble to find a “content edge” for market moving information has never been higher. However, that opens up a far trickier question: whose information on the fastest growing social network, one which many say may surpass Bloomberg in terms of news propagation and functionality, is credible and by implication: whose is not? Indeed, that is the $64K question. Luckily, there is an algo for that.

In a note by Castillo et al. from Yahoo Research in Spain and Chile, the authors focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, they analyze microblog postings related to “trending” topics and classify them as credible or not credible, based on features extracted from them. Their results show that there are measurable differences in the way messages propagate, which can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.

Needless to say, the topic of social media credibility is a critical one, in part due to the voluntary anonymity of the majority of sources, the frequent error rate of named sources and the painfully subjective attributes involved in determining good and bad information, and one where discerning the credible sources has become a very lucrative business. Further from the authors:

In a recent user study, it was found that providing information to users about the estimated credibility of online content was very useful and valuable to them. In absence of this external information, perceptions of credibility online are strongly influenced by style-related attributes, including visual design, which are not directly related to the content itself. Users also may change their perception of credibility of a blog posting depending on the (supposed) gender of the author. In this light the results of the experiment described are not surprising. In the experiment, the headline of a news item was presented to users in different ways, i.e. as posted in a traditional media website, as a blog, and as a post on Twitter. Users found the same news headline significantly less credible when presented on Twitter.

 

This distrust may not be completely ungrounded. Major search engines are starting to prominently display search results from the “real-time web” (blog and microblog postings), particularly for trending topics. This has attracted spammers that use Twitter to attract visitors to (typically) web pages offering products or services. It has also increased the potential impact of orchestrated attacks that spread lies and misinformation. Twitter is currently being used as a tool for political propaganda. Misinformation can also be spread unwillingly. For instance, on November 2010 the Twitter account of the presidential adviser for disaster management of Indonesia was hacked. The hacker then used the account to post a false tsunami warning. On January 2011 rumors of a shooting in the Oxford Circus in London, spread rapidly through Twitter. A large collection of screenshots of those tweets can be found online.

 

Recently, the Truthy service from researchers at Indiana University, has started to collect, analyze and visualize the spread of tweets belonging to “trending topics”. Features collected from the tweets are used to compute a truthiness score for a set of tweets. Those sets with low truthiness score are more likely to be part of a campaign to deceive users. Instead, in our work we do not focus specifically on detecting willful deception, but look for factors that can be used to automatically approximate users’ perceptions of credibility.

The study’s conclusion: “we have shown that for messages about time-sensitive topics, we can separate automatically newsworthy topics from other types of conversations. Among several other features, newsworthy topics tend to include URLs and to have deep propagation trees. We also show that we can assess automatically the level of social media credibility of newsworthy topics. Among several other features, credible news are propagated through authors that have previously written a large number of messages, originate at a single or a few users in the network, and have many re-posts.”

All of the above is largely known. What isn’t, however, is the mostly generic matrix used by various electronic and algorithmic sources to determine who is real and who isn’t, and thus who is market-moving and who, well, isn’t. Once again, courtesy of Castillo, one can determine how the filtering algo operates (and thus reverse engineer it). So without further ado, here is the set of features used by Twitter truth-seekers everywhere.
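
The paper’s feature table is not reproduced here, but a rough sketch of how such per-topic features could be computed looks like the following. The tweet fields and the tiny sentiment word list are invented for illustration; this is not the paper’s feature extractor.

```python
import re
from statistics import mean

def topic_features(tweets):
    """tweets: list of dicts like
    {"text": str, "author_msg_count": int, "author_friends": int, "retweets": int}
    (a made-up schema standing in for data pulled from the Twitter API)."""
    n = len(tweets)
    has_url  = [bool(re.search(r"https?://", t["text"])) for t in tweets]
    has_bang = ["!" in t["text"] for t in tweets]
    negative = [bool(re.search(r"\b(bad|fake|wrong|hoax|scam)\b", t["text"].lower()))
                for t in tweets]   # toy sentiment lexicon, not the paper's
    return {
        "frac_with_url":      sum(has_url) / n,
        "frac_exclamation":   sum(has_bang) / n,
        "frac_negative":      sum(negative) / n,
        "avg_author_msgs":    mean(t["author_msg_count"] for t in tweets),
        "avg_author_friends": mean(t["author_friends"] for t in tweets),
        "max_retweets":       max(t["retweets"] for t in tweets),
    }
```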

Those are the variables. And as for how an algo decides whether a source’s data can be trusted and thus acted upon, here is the full decision tree. First, in summary:

As the decision tree shows, the top features for this task were the following:

  • Topic-based features: the fraction of tweets containing a URL is the root of the tree. Sentiment-based features, such as the fraction of tweets with negative sentiment or with an exclamation mark, are the next most relevant features, very close to the root. In particular, we can observe two very simple classification rules: tweets which do not include URLs tend to be related to non-credible news, while tweets which include negative sentiment terms are related to credible news. Something similar occurs with positive sentiment terms: a low fraction of tweets with positive sentiment terms tends to be related to non-credible news.
  • User-based features: this collection of features is very relevant for this task. Notice that less credible news is mostly propagated by users who have not written many messages in the past. The number of friends is also a feature that sits very close to the root.
  • Propagation-based features: the maximum level size of the RT (retweet) tree is also a relevant feature for this task. Tweets with many re-tweets are related to credible news.

These results show that textual information is very relevant for this task. Opinions and subjective expressions describe people’s sentiments or perceptions about a given topic or event, and they help gauge the community’s perception of an event’s credibility. User-based features, on the other hand, are indicators of a user’s reputation. Messages propagated through credible users (active users with a significant number of connections) are seen as highly credible; such users tend to propagate credible news, suggesting that the Twitter community works like a social filter.
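
As a rough illustration of the classification step, the same kind of topic-level features can feed a decision tree directly. The tiny hand-made dataset below is invented, and scikit-learn’s DecisionTreeClassifier stands in for whatever tree learner the authors actually used.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

feature_names = ["frac_with_url", "frac_negative", "avg_author_msgs", "max_rt_depth"]
X = [
    [0.9, 0.4, 3200, 6],   # many URLs, negative sentiment, prolific authors
    [0.8, 0.3, 1500, 5],
    [0.1, 0.0,   40, 1],   # few URLs, inactive authors, shallow propagation
    [0.2, 0.1,   60, 2],
]
y = ["credible", "credible", "not_credible", "not_credible"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(clf, feature_names=feature_names))

# score a new trending topic (invented numbers)
print(clf.predict([[0.7, 0.2, 900, 4]]))   # -> likely "credible" under this toy tree
```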

And visually:

Get to the very bottom of the tree without spooking too many algos, and you too can have a Carl Icahn-like impact on the stock of your choosing.

Source: Information Credibility on Twitter


    



via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/e5RNXug43FA/story01.htm Tyler Durden
