From Non-GAAP To Non-Sense: David Stockman Slams The “Earnings Ex-Items” Smoke-Screen

We noted on Thursday, when Alcoa reported, that "non-recurring, one-time" charges are anything but, indicating just how freely the company abuses the non-GAAP EPS definition and how adding back charges has become the ordinary course of business. But it's not just Alcoa. As David Stockman, author of The Great Deformation, notes, Wall Street’s institutionalized fiddle of GAAP earnings makes P/E multiples appear far lower than they actually are, and thereby helps perpetuate the myth that the market is "cheap."

 

Via David Stockman,

THE “EARNINGS EX-ITEMS” SMOKE SCREEN

One of the reasons that the monetary politburo was unconcerned about the blatant buying of earnings through financial engineering is that it fully subscribed to the gussied-up version of EPS peddled by Wall Street. The latter was known as “operating earnings” or “earnings ex-items,” and it was derived by removing from the GAAP (generally accepted accounting principles)-based financial statements filed with the SEC any and all items which could be characterized as “one-time” or “nonrecurring.”

These adjustments included asset write-downs, goodwill write-offs, and most especially “restructuring” charges to cover the cost of head-count reductions, including severance payments. Needless to say, in an environment in which labor was expensive and debt was cheap, successive waves of corporate downsizings could be undertaken without the inconvenience of a pox on earnings due to severance costs; these charges were “one time” and to be ignored by investors.

Likewise, there was no problem with the high failure rate of M&A deals. In due course, dumb investments could be written off and the resulting losses wouldn’t “count” in earnings ex-items.

In short, Wall Street’s institutionalized fiddle of GAAP earnings made PE multiples appear far lower than they actually were, and thereby helped perpetuate the myth that the market was “cheap” during the second Greenspan stock market bubble. Thus, as the S&P 500 index reached its nosebleed peaks around 1,500 in mid-2007, Wall Street urged investors not to worry because the PE multiple was within its historic range.

In fact, the 500 S&P companies recorded net income ex-items of $730 billion in 2007 relative to an average market cap during the year of $13 trillion. The implied PE multiple of 18X was not over the top, but then it wasn’t on the level, either. The S&P 500 actually reported GAAP net income that year of only $587 billion, a figure that was 20 percent lower owing to the exclusion of $144 billion of charges and expenses that were deemed “nonrecurring.” The actual PE multiple on GAAP net income was 22X, however, and that was expensive by any historic standard, and especially at the very top of the business cycle.

During 2008 came the real proof of the pudding. Corporations took a staggering $304 billion in write-downs for assets which were drastically overvalued and business operations which were hopelessly unprofitable. Accordingly, reported GAAP net income for the S&P 500 plunged to just $132 billion, meaning that during the course of the year the average market cap of $10 trillion represented 77X net income.
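
For readers who want to check the arithmetic, here is a quick back-of-envelope calculation, a sketch in Python using the rounded figures quoted above (the small gap on the 2008 multiple simply reflects that rounding):

# Rounded S&P 500 figures cited above, in billions of dollars.
data = {
    2007: {"avg_market_cap": 13_000, "ex_items_income": 730, "gaap_income": 587},
    2008: {"avg_market_cap": 10_000, "gaap_income": 132},
}

pe_ex_items_2007 = data[2007]["avg_market_cap"] / data[2007]["ex_items_income"]
pe_gaap_2007 = data[2007]["avg_market_cap"] / data[2007]["gaap_income"]
pe_gaap_2008 = data[2008]["avg_market_cap"] / data[2008]["gaap_income"]

print(f"2007 P/E on ex-items earnings: {pe_ex_items_2007:.1f}X")  # ~17.8X, cited as 18X
print(f"2007 P/E on GAAP earnings:     {pe_gaap_2007:.1f}X")      # ~22.1X, cited as 22X
print(f"2008 P/E on GAAP earnings:     {pe_gaap_2008:.1f}X")      # ~75.8X, cited as 77X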

To be sure, after the financial crisis cooled off the span narrowed considerably between GAAP legal earnings and the Wall Street “ex-items” rendition of profits, and not surprisingly in light of how much was thrown into the kitchen sink in the fourth quarter of 2008. Even after this alleged house cleaning, however, more than $100 billion of charges and expenses were excluded from Wall Street’s reckoning of the presumptively “clean” S&P earnings reported for both 2009 and 2010.

So, if the four years are taken as a whole, the scam is readily evident. During this period, Wall Street claimed that the S&P 500 posted cumulative net income of $2.42 trillion. In fact, CEOs and CFOs required to sign the Sarbanes-Oxley statements didn’t see it that way. They reported net income of $1.87 trillion. The difference was accounted for by an astounding $550 billion in corporate losses that the nation’s accounting profession insisted were real, and that had been reported because the nation’s securities cops would have sent out the paddy wagons had they not been.

During the four-year round trip from peak-to-bust-to-recovery, the S&P 500 had thus traded at an average market cap of $10.6 trillion, representing nearly twenty-three times the average GAAP earnings reported during that period. Not only was that not “cheap” by any reasonable standard, but it was also indicative of the delusions and deformations that the Fed’s bubble finance had injected into the stock market.

In fact, every dollar of the $550 billion of charges during 2007–2010 that Wall Street chose not to count represented destruction of shareholder value. When companies chronically overpaid for M&A deals, and then four years later wrote off the goodwill, that was an “ex-item” in the Wall Street version of earnings, but still cold corporate cash that had gone down the drain. The same was true of equipment and machinery write-offs when plants were shut down, or of leases written off when stores were closed. Most certainly, there was destruction of value when tens of billions were paid out for severance, health care, and pensions during the waves of headcount reductions.

To be sure, some of these charges represented economically efficient actions under any circumstances; that is, when the Schumpeterian mechanism of creative destruction was at work. The giant disconnect, however, is that these actions and the resulting charges to GAAP income statements were not in the least “one time.” Instead, they were part of the recurring cost of doing business in the hot-house economy of interest rate repression, central bank puts, rampant financial speculation, and mercantilist global trade that arose from the events of August 1971.

The economic cost of business mistakes, restructurings, and balance sheet house cleaning can be readily averaged and smoothed, an appropriate accounting treatment because these costs are real and recurring. Accordingly, the four-year average experience for the 2007–2010 market cycle is illuminating.

The Wall Street “ex-item” number for S&P 500 net income during that period overstated honest accounting profits by an astonishing 30 percent. Stated differently, the time-weighted PE multiple on an ex-items basis was already at an exuberant 17.6X. In truth, however, the market was actually valuing true GAAP earnings at nearly 23X.
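
The same sanity check, applied to the four-year figures above (again a sketch using the rounded numbers from the text), recovers both the roughly 30 percent overstatement and the two multiples:

# Cumulative 2007-2010 S&P 500 figures cited above, in trillions of dollars.
ex_items_income = 2.42   # Wall Street "ex-items" net income
gaap_income = 1.87       # GAAP net income reported under Sarbanes-Oxley
avg_market_cap = 10.6    # average market cap over the four years

excluded_charges = ex_items_income - gaap_income      # ~$0.55 trillion, i.e. $550 billion
overstatement = ex_items_income / gaap_income - 1     # ~0.29, i.e. roughly 30 percent
pe_ex_items = avg_market_cap / (ex_items_income / 4)  # ~17.5X, cited as 17.6X
pe_gaap = avg_market_cap / (gaap_income / 4)          # ~22.7X, i.e. "nearly 23X"

print(excluded_charges, overstatement, pe_ex_items, pe_gaap)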

This was a truly absurd capitalization rate for the earnings of a basket of giant companies domiciled in a domestic economy where economic growth was grinding to a halt. It was also a wildly excessive valuation for earnings that had been inflated by $5 trillion of business debt growth owing to buybacks, buyouts, and takeovers.

via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/INDOc97iAXw/story01.htm Tyler Durden

Overstock’s First Day Of Bitcoin: $130,000 Sales, 840 Transactions, CEO “Stunned”

Submitted by Michael Krieger of Liberty Blitzkrieg blog,

Upon the conclusion of the Senate hearing on Bitcoin this past November, I tweeted that I thought we had entered Phase 2 of the Bitcoin story. A month later, following news that Andreessen Horowitz had led a venture capital investment of $25 million in Coinbase, I wrote:

As I tweeted at the time, I think Bitcoin began phase two of its growth and adoption cycle upon the conclusion of the Senate hearings last month (I suggest reading: My Thoughts on the Bitcoin Hearing).

 

I think phase two will be primarily characterized by two things: more mainstream adoption and ease of use, as well as increasingly large investments by venture capitalists. In the past 24 hours, we have seen evidence of both.

Phase 2 so far is going even more positively than I had expected. Overstock.com accelerated its plans to accept BTC by many months, and the early rollout has been a massive success. The company’s CEO just tweeted:

This is absolutely huge news and any retail CEO worth their salt will immediately begin to look into Bitcoin adoption.

I hope financial publications that missed the biggest financial story of 2013 continue to mock it with covers of unicorns and waterfalls. It’s the most bullish thing I can imagine.

Furthermore, the purchased items are varied…

The apparent ease of acceptance and use has spurred demand for Bitcoin itself, which has pushed back above $1,000…

via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/wNchFC5m30c/story01.htm Tyler Durden

How Twitter Algos Determine Who Is Market-Moving And Who Isn’t

Now that even Bridgewater has joined the Twitter craze and is using user-generated content for real-time economic modelling, and who knows what else, the scramble to determine who has the most market-moving, and actionable, Twitter stream is on. With HFT algos having camped out at all the usual newswire sources (Bloomberg, Reuters, Dow Jones, etc.), the race to find a “content edge” for market-moving information has never been more intense. However, that opens up a far trickier question: whose information on the fastest-growing social network, one which many say may surpass Bloomberg in terms of news propagation and functionality, is credible and, by implication, whose is not? Indeed, that is the $64K question. Luckily, there is an algo for that.

In a note by Castillo et al. from Yahoo Research in Spain and Chile, the authors focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, they analyze microblog postings related to “trending” topics and classify them as credible or not credible, based on features extracted from them. Their results show that there are measurable differences in the way messages propagate, which can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.
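
As a rough illustration of what that pipeline looks like in practice, the sketch below aggregates per-topic features from a set of tweets and fits an off-the-shelf decision tree. It is an assumption-laden toy, not the authors' code: the feature set, field names, and the two hand-made example topics are all invented for illustration.

from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import precision_score, recall_score

def topic_features(tweets):
    """Aggregate a list of tweets (dicts with 'text', 'author_msg_count', 'retweets')
    into one feature vector per topic, loosely mirroring the paper's feature families."""
    n = len(tweets)
    return [
        sum("http" in t["text"] for t in tweets) / n,    # fraction of tweets containing a URL
        sum("!" in t["text"] for t in tweets) / n,       # fraction with an exclamation mark
        sum(t["author_msg_count"] for t in tweets) / n,  # average past messages per author
        max(t["retweets"] for t in tweets),              # re-tweet propagation proxy
    ]

# Two invented topics: 1 = credible news, 0 = non-credible chatter.
topics = [
    ([{"text": "Earthquake hits the capital http://news/1", "author_msg_count": 8000, "retweets": 250},
      {"text": "Agency confirms magnitude 6.1 http://news/2", "author_msg_count": 12000, "retweets": 180}], 1),
    ([{"text": "omg did you hear?!", "author_msg_count": 40, "retweets": 2},
      {"text": "no way!!", "author_msg_count": 15, "retweets": 1}], 0),
]

X = [topic_features(tweets) for tweets, label in topics]
y = [label for tweets, label in topics]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
pred = clf.predict(X)
# The toy set is trivially separable; on the paper's real dataset the reported
# precision and recall are in the 70-80% range.
print(precision_score(y, pred), recall_score(y, pred))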

Needless to say, the topic of social media credibility is a critical one, in part due to the voluntary anonymity of the majority of sources, the frequent error rate of named sources, and the painfully subjective attributes involved in determining good and bad information, and it is one where discerning the credible sources has become a very lucrative business. Further from the authors:

In a recent user study, it was found that providing information to users about the estimated credibility of online content was very useful and valuable to them. In the absence of this external information, perceptions of credibility online are strongly influenced by style-related attributes, including visual design, which are not directly related to the content itself. Users may also change their perception of the credibility of a blog posting depending on the (supposed) gender of the author. In this light, the results of the experiment described next are not surprising. In the experiment, the headline of a news item was presented to users in different ways, i.e., as posted on a traditional media website, as a blog, and as a post on Twitter. Users found the same news headline significantly less credible when presented on Twitter.

 

This distrust may not be completely ungrounded. Major search engines are starting to prominently display search results from the “real-time web” (blog and microblog postings), particularly for trending topics. This has attracted spammers that use Twitter to attract visitors to (typically) web pages offering products or services. It has also increased the potential impact of orchestrated attacks that spread lies and misinformation. Twitter is currently being used as a tool for political propaganda. Misinformation can also be spread unwillingly. For instance, in November 2010 the Twitter account of the presidential adviser for disaster management of Indonesia was hacked. The hacker then used the account to post a false tsunami warning. In January 2011, rumors of a shooting at Oxford Circus in London spread rapidly through Twitter. A large collection of screenshots of those tweets can be found online.

 

Recently, the Truthy service from researchers at Indiana University has started to collect, analyze, and visualize the spread of tweets belonging to “trending topics”. Features collected from the tweets are used to compute a truthiness score for a set of tweets. Those sets with a low truthiness score are more likely to be part of a campaign to deceive users. In our work, by contrast, we do not focus specifically on detecting willful deception, but rather look for factors that can be used to automatically approximate users’ perceptions of credibility.

The study’s conclusion: “we have shown that for messages about time-sensitive topics, we can separate automatically newsworthy topics from other types of conversations. Among several other features, newsworthy topics tend to include URLs and to have deep propagation trees. We also show that we can assess automatically the level of social media credibility of newsworthy topics. Among several other features, credible news are propagated through authors that have previously written a large number of messages, originate at a single or a few users in the network, and have many re-posts.”

All of the above is largely known. What isn’t, however, is the mostly generic matrix used by various electronic and algorithmic sources to determine who is real and who isn’t, and thus who is market-moving and who, well, isn’t. Once again, courtesy of Castillo, one can determine how the filtering algo operates (and thus reverse engineer it). So without further ado, here is the set of features used by Twitter truth-seekers everywhere.

Those are the variables. And as for the decision tree that leads an algo to conclude whether a source’s data can be trusted and thus acted upon, here it is, first in summary:

As the decision tree shows, the top features for this task were the following:

  • Topic-based features: the fraction of tweets containing a URL is the root of the tree. Sentiment-based features, like the fraction of tweets with negative sentiment or the fraction with an exclamation mark, are the next most relevant features, very close to the root. In particular, we can observe two very simple classification rules: tweets which do not include URLs tend to be related to non-credible news, while tweets which include negative sentiment terms are related to credible news. Something similar occurs with positive sentiment terms: a low fraction of tweets with positive sentiment terms tends to be related to non-credible news.
  • User-based features: this collection of features is very relevant for this task. Notice that less credible news is mostly propagated by users who have not written many messages in the past. The number of friends is also a feature that sits very close to the root.
  • Propagation-based features: the maximum level size of the RT (re-tweet) tree is also a relevant feature for this task. Tweets with many re-tweets are related to credible news.

These results show that textual information is very relevant for this task. Opinions or subjective expressions describe people’s sentiments or perceptions about a given topic or event. Opinions also matter here because they allow the classifier to detect the community’s perception of the credibility of an event. On the other hand, user-based features are indicators of the reputation of the users. Messages propagated through credible users (active users with a significant number of connections) are seen as highly credible. Those users tend to propagate credible news, suggesting that the Twitter community works like a social filter.
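
Boiled down to a rule of thumb, the top of the tree described above amounts to a handful of checks. The following toy distillation shows the shape of that logic; the thresholds are illustrative assumptions, not the values actually learned in the paper:

def looks_credible(url_fraction, negative_sentiment_fraction,
                   avg_author_msg_count, max_rt_level):
    """Toy approximation of the top of the credibility tree (illustrative thresholds)."""
    if url_fraction < 0.3:                  # root split: few tweets carry URLs
        return False                        # -> tends toward non-credible chatter
    if negative_sentiment_fraction > 0.2:   # negative-sentiment terms present
        return True                         # -> associated with credible news
    if avg_author_msg_count < 100:          # authors with thin posting histories
        return False                        # -> low-credibility propagation
    return max_rt_level >= 3                # deep re-tweet tree -> credible

# Example: URL-heavy topic, some negative sentiment, spread by established accounts.
print(looks_credible(0.8, 0.25, 2500, 5))   # True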

And visually:

Get to the very bottom of the tree without spooking too many algos, and you too can have a Carl Icahn-like impact on the stock of your choosing.

Source: Information Credibility on Twitter


via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/e5RNXug43FA/story01.htm Tyler Durden

BitPay is Now Adding 1,000 New Merchants Per Week

Earlier today, the Bitcoin news website Coindesk reported that BitPay is adding 1,000 new merchants per week, in an article highlighting the fact that private jet company PrivateFly had just teamed up with the payment processor to accept BTC for its charter flights.

Just to put this into perspective and understand just how staggering this growth is: BitPay first surpassed 1,000 total merchants in September 2012 and reached a total of 10,000 in September 2013. At its current growth rate, the company is set to double that 10,000-merchant milestone in roughly two and a half months. Incredible.
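
For what it's worth, the arithmetic behind that claim is trivial to check (a quick sketch using the figures quoted above):

merchants_per_week = 1_000
current_base = 10_000          # merchant count reached in September 2013

weeks_to_add_10k = current_base / merchants_per_week  # 10 weeks to add another 10,000
months_to_add_10k = weeks_to_add_10k * 7 / 30.4       # ~2.3 months, about two and a half
print(weeks_to_add_10k, round(months_to_add_10k, 1))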

From Coindesk:

“We believe that merchants are starting to see the value that accepting bitcoin can bring to their business,” said BitPay’s Jan Jahosky. “We’re adding merchants at a pace of 1,000 new merchants per week.”

“We expect exponential growth in the popularity of bitcoin around the world with both merchants and consumers, and anticipate seeing the biggest growth in China, India, Russia and South America.”

Full article here.

In Liberty,
Mike

BitPay is Now Adding 1,000 New Merchants Per Week originally appeared on A Lightning War for Liberty on January 11, 2014.

from A Lightning War for Liberty http://libertyblitzkrieg.com/2014/01/11/bitpay-is-now-adding-1000-new-merchants-per-week/

Obamacare “Approval” Drops To Record Low

For the current administration, now with a fresh developer to fix all the problems (with the website), the reality of public perception of Obamacare has gone from worst to worster-er this week. As Gallup polls show, nearly half of Americans say the Affordable Care Act will make the healthcare situation in the U.S. worse in the long run.

When asked more broadly if they approve or disapprove of Obamacare, Americans come down on the disapprove side by 54% to 38% – a new record low for ‘approval’.

So despite the full-court-press marketing of this great new must-have product, and in light of the fact that the ‘risk pool’ looks to be disastrous, things are not improving at all.

Perhaps not surprisingly though, Gallup concludes,

…remarkably, there has been little fundamental change in most of these attitudes over the past year or two — and especially in recent months, despite the highly contentious and visible introduction of the ACA’s major features. Americans’ views of the healthcare law seem to be fairly well established, and largely rooted in partisan politics.

Of course, we look forward to the next month as bills come due and people realize that “affordable” means something different than they were promised (i.e. not free)…


via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/hFedxC9rldY/story01.htm Tyler Durden

Obamacare "Approval" Drops To Record Low

For the current administration, now with a fresh developer to fix all the problems (with the website), the reality of public perception over Obamacare has gone from worst to worster-er this week. As Gallup polls show, nearly half of Americans say the Affordable Care Act will make the healthcare situation in the U.S. worse in the long run.

 

 

When asked more broadly if they approve or disapprove of Obamacare, Americans come down on the disapprove side by 54% to 38% – a new record low for ‘approval’.

 

So despite the full court press marketing of this great new must-have product – and in light of the fact that the ‘risk-pool’ looks to be disastrous, things are not improving at all.

Perhaps not surprisingly though, Gallup concludes,

…remarkably, there has been little fundamental change in most of these attitudes over the past year or two — and especially in recent months, despite the highly contentious and visible introduction of the ACA’s major features. Americans’ views of the healthcare law seem to be fairly well established, and largely rooted in partisan politics.

Of course, we look forward to the next month as bills come due and people realize that “affordable” means something different than they were promised (i.e. not free)…


    



via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/hFedxC9rldY/story01.htm Tyler Durden