Overstock’s First Day Of Bitcoin: $130,000 Sales, 840 Transactions, CEO “Stunned”

Submitted by Michael Krieger of Liberty Blitzkrieg blog,

Upon the conclusion of the Senate hearing on Bitcoin this past November, I tweeted that I thought we had entered Phase 2 of the Bitcoin story. A month later, following news that Andreessen Horowitz had led a venture capital investment of $25 million in Coinbase, I wrote:

As I tweeted at the time, I think Bitcoin began phase two of its growth and adoption cycle upon the conclusion of the Senate hearings last month (I suggest reading: My Thoughts on the Bitcoin Hearing).

I think phase two will be primarily characterized by two things: more mainstream adoption and ease of use, and increasingly large investments by venture capitalists. In the past 24 hours, we have seen evidence of both.

Phase 2 so far is going even better than I had expected. Overstock.com accelerated its plans to accept BTC by many months, and the early rollout has been a massive success. The company’s CEO, Patrick Byrne, tweeted the first-day numbers: more than $130,000 in sales across 840 transactions.

This is absolutely huge news and any retail CEO worth their salt will immediately begin to look into Bitcoin adoption.

I hope financial publications that missed the biggest financial story of 2013 continue to mock it with covers of unicorns and waterfalls. It’s the most bullish thing I can imagine.

Furthermore, the purchased items are varied…

The apparent ease of acceptance and use has spurred demand for Bitcoin itself, which has pushed back above $1,000…

via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/wNchFC5m30c/story01.htm Tyler Durden

How Twitter Algos Determine Who Is Market-Moving And Who Isn’t

Now that even Bridgewater has joined the Twitter craze and is using user-generated content for real-time economic modelling, and who knows what else, the scramble to determine who has the most market-moving, and actionable, Twitter stream is on. With HFT algos having camped out at all the usual newswire sources (Bloomberg, Reuters, Dow Jones, etc.), the hunt for a “content edge” in market-moving information has never been more intense. However, that opens up a far trickier question: whose information on the fastest-growing social network, one which many say may surpass Bloomberg in terms of news propagation and functionality, is credible, and by implication, whose is not? Indeed, that is the $64K question. Luckily, there is an algo for that.

In a note by Castillo et al. from Yahoo Research in Spain and Chile, the authors focus on automatic methods for assessing the credibility of a given set of tweets. Specifically, they analyze microblog postings related to “trending” topics and classify them as credible or not credible, based on features extracted from them. Their results show that there are measurable differences in the way messages propagate, which can be used to classify them automatically as credible or not credible, with precision and recall in the range of 70% to 80%.

Needless to say, the topic of social media credibility is a critical one: in part due to the voluntary anonymity of the majority of sources, the frequent error rate of named sources, and the painfully subjective attributes involved in determining good and bad information. Discerning the credible sources has, accordingly, become a very lucrative business. Further from the authors:

In a recent user study, it was found that providing information to users about the estimated credibility of online content was very useful and valuable to them. In the absence of this external information, perceptions of credibility online are strongly influenced by style-related attributes, including visual design, which are not directly related to the content itself. Users may also change their perception of the credibility of a blog posting depending on the (supposed) gender of the author. In this light, the results of the experiment described are not surprising. In the experiment, the headline of a news item was presented to users in different ways, i.e. as posted on a traditional media website, as a blog, and as a post on Twitter. Users found the same news headline significantly less credible when presented on Twitter.

 

This distrust may not be completely ungrounded. Major search engines are starting to prominently display search results from the “real-time web” (blog and microblog postings), particularly for trending topics. This has attracted spammers who use Twitter to draw visitors to (typically) web pages offering products or services. It has also increased the potential impact of orchestrated attacks that spread lies and misinformation. Twitter is currently being used as a tool for political propaganda. Misinformation can also be spread unwillingly. For instance, in November 2010 the Twitter account of the presidential adviser for disaster management of Indonesia was hacked. The hacker then used the account to post a false tsunami warning. In January 2011, rumors of a shooting at Oxford Circus in London spread rapidly through Twitter. A large collection of screenshots of those tweets can be found online.

 

Recently, the Truthy service from researchers at Indiana University has started to collect, analyze and visualize the spread of tweets belonging to “trending topics”. Features collected from the tweets are used to compute a truthiness score for a set of tweets. Those sets with a low truthiness score are more likely to be part of a campaign to deceive users. Instead, in our work we do not focus specifically on detecting willful deception, but look for factors that can be used to automatically approximate users’ perceptions of credibility.

The study’s conclusion: “we have shown that for messages about time-sensitive topics, we can separate automatically newsworthy topics from other types of conversations. Among several other features, newsworthy topics tend to include URLs and to have deep propagation trees. We also show that we can assess automatically the level of social media credibility of newsworthy topics. Among several other features, credible news are propagated through authors that have previously written a large number of messages, originate at a single or a few users in the network, and have many re-posts.”

All of the above is largely known. What isn’t, however, is the mostly generic matrix used by various electronic and algorithmic sources to determine who is real and who isn’t, and thus who is market-moving and who, well, isn’t. Once again, courtesy of Castillo, one can determine how the filtering algo operates (and thus reverse-engineer it). So without further ado, here is the set of features used by Twitter truth-seekers everywhere.
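To make the flavor of that feature set concrete, here is a minimal Python sketch that computes a few of the message-, user-, and propagation-level features the paper names, for a set of tweets about one topic. The input schema (text, author_tweet_count, retweet_depth) and the toy sentiment lexicon are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch of topic-level feature extraction in the spirit of
# Castillo et al. Field names and the tiny sentiment word list are
# illustrative assumptions, not the paper's actual schema.
from statistics import mean

def extract_topic_features(tweets):
    """tweets: list of dicts with 'text', 'author_tweet_count', 'retweet_depth'."""
    n = len(tweets)
    return {
        # Topic-based: fractions of tweets with certain surface cues
        "frac_with_url": sum("http" in t["text"] for t in tweets) / n,
        "frac_with_exclamation": sum("!" in t["text"] for t in tweets) / n,
        "frac_negative_sentiment": sum(
            any(w in t["text"].lower() for w in ("bad", "fake", "wrong"))
            for t in tweets) / n,  # toy stand-in for a sentiment lexicon
        # User-based: past activity of the posting accounts
        "avg_author_tweet_count": mean(t["author_tweet_count"] for t in tweets),
        # Propagation-based: depth of the re-tweet tree
        "max_retweet_depth": max(t["retweet_depth"] for t in tweets),
    }

tweets = [
    {"text": "Breaking: details here http://ex.am/1 !", "author_tweet_count": 5200, "retweet_depth": 4},
    {"text": "this is fake, don't believe it", "author_tweet_count": 12, "retweet_depth": 1},
]
print(extract_topic_features(tweets))
```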

Those are the variables. And as for the decision tree that leads an algo to conclude if a source’s data can be trusted and thus acted upon, here is the full decision tree. First in summary:

As the decision tree shows, the top features for this task were the following:

  • Topic-based features: the fraction of tweets having a URL is the root of the tree. Sentiment-based features, like the fraction of negative sentiment or the fraction of tweets with an exclamation mark, are the next most relevant features, very close to the root. In particular, we can observe two very simple classification rules: tweets that do not include URLs tend to be related to non-credible news, while tweets that include negative sentiment terms tend to be related to credible news. Something similar occurs with positive sentiment terms: a low fraction of tweets with positive sentiment terms tends to be related to non-credible news.
  • User-based features: this collection of features is very relevant for this task. Notice that less credible news is mostly propagated by users who have not written many messages in the past. The number of friends is also a feature that is very close to the root.
  • Propagation-based features: the maximum level size of the RT tree is also a relevant feature for this task. Tweets with many re-tweets are related to credible news.

These results show that textual information is very relevant for this task. Opinions or subjective expressions describe people’s sentiments or perceptions about a given topic or event, and they allow us to detect the community’s perception of the credibility of an event. On the other hand, user-based features are indicators of the reputation of the users. Messages propagated through credible users (active users with a significant number of connections) are seen as highly credible. Thus, those users tend to propagate credible news, suggesting that the Twitter community works like a social filter.
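Those simple rules translate almost directly into a shallow decision tree. Here is a sketch of how one might train such a classifier over topic-level feature vectors; the training rows and labels are hypothetical, and scikit-learn stands in for whatever learner the authors actually used.

```python
# Sketch: fit a shallow decision tree over per-topic feature vectors to
# predict credibility. Data below is hypothetical, for illustration only.
from sklearn.tree import DecisionTreeClassifier, export_text

FEATURES = ["frac_with_url", "frac_negative_sentiment",
            "avg_author_tweet_count", "max_retweet_depth"]

# Each row is one trending topic; label 1 = judged credible, 0 = not.
X = [
    [0.9, 0.4, 8000, 6],   # many URLs, negative terms, active authors -> credible
    [0.8, 0.3, 5000, 5],
    [0.1, 0.0, 40, 1],     # few URLs, low-activity authors -> not credible
    [0.2, 0.1, 90, 2],
]
y = [1, 1, 0, 0]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Inspect the learned rules, e.g. a root split on the URL fraction,
# mirroring the "tweets without URLs tend to be non-credible" rule.
print(export_text(clf, feature_names=FEATURES))
print(clf.predict([[0.85, 0.35, 6000, 5]]))  # -> [1]
```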

And visually:

Get to the very bottom of the tree without spooking too many algos, and you too can have a Carl Icahn-like impact on the stock of your choosing.

Source: Information Credibility on Twitter

via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/e5RNXug43FA/story01.htm Tyler Durden

BitPay is Now Adding 1,000 New Merchants Per Week

Earlier today, the Bitcoin news website Coindesk reported that BitPay is adding 1,000 new merchants per week, in an article highlighting the fact that private jet company PrivateFly had just teamed up with the payment processor to accept BTC for its charter flights.

Just to put into perspective how staggering this growth is: BitPay first surpassed 1,000 total merchants in September 2012, and 10,000 in September 2013. At its current pace, the company is set to add another 10,000 merchants, doubling that milestone, in roughly two and a half months. Incredible.
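The back-of-envelope arithmetic, as a quick sketch (the only inputs are the article's 1,000-per-week rate and the 10,000-merchant milestone):

```python
# Back-of-envelope check: at 1,000 new merchants per week, adding another
# 10,000 (i.e., doubling the September 2013 total) takes ten weeks,
# or roughly two and a half months.
weekly_rate = 1_000
to_add = 10_000
weeks = to_add / weekly_rate
print("%d weeks = ~%.1f months" % (weeks, weeks * 7 / 30.4))  # 10 weeks = ~2.3 months
```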

From Coindesk:

“We believe that merchants are starting to see the value that accepting bitcoin can bring to their business,” said BitPay’s Jan Jahosky. “We’re adding merchants at a pace of 1,000 new merchants per week.”

“We expect exponential growth in the popularity of bitcoin around the world with both merchants and consumers, and anticipate seeing the biggest growth in China, India, Russia and South America.”

Full article here.

In Liberty,
Mike

BitPay is Now Adding 1,000 New Merchants Per Week originally appeared on A Lightning War for Liberty on January 11, 2014.

from A Lightning War for Liberty http://libertyblitzkrieg.com/2014/01/11/bitpay-is-now-adding-1000-new-merchants-per-week/

Obamacare “Approval” Drops To Record Low

For the current administration, now with a fresh developer to fix all the problems (with the website), the reality of public perception over Obamacare has gone from worst to worster-er this week. As Gallup polls show, nearly half of Americans say the Affordable Care Act will make the healthcare situation in the U.S. worse in the long run.

 

 

When asked more broadly if they approve or disapprove of Obamacare, Americans come down on the disapprove side by 54% to 38% – a new record low for ‘approval’.

 

So despite the full-court-press marketing of this great new must-have product, and in light of the fact that the ‘risk pool’ looks to be disastrous, things are not improving at all.

Perhaps not surprisingly though, Gallup concludes,

…remarkably, there has been little fundamental change in most of these attitudes over the past year or two — and especially in recent months, despite the highly contentious and visible introduction of the ACA’s major features. Americans’ views of the healthcare law seem to be fairly well established, and largely rooted in partisan politics.

Of course, we look forward to the next month as bills come due and people realize that “affordable” means something different from what they were promised (i.e. not free)…

via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/hFedxC9rldY/story01.htm Tyler Durden

Relax Or You Will Be Fired

It's not just Bank of America that is 'worried' about the health of its junior employees. As Bloomberg Businessweek reports, Americans eat at their desks, work longer days, and retire later than their counterparts in most other parts of the world (especially France). But while the likes of Oprah offer holistic solutions to harness your stress, and bosses insist that you take your weekends off (or else), the workload itself is not reduced. Do not fear, though: Senator Glenn Grothman is pressing to undo a "goofy" law requiring employers to give workers a day off. "All sorts of people want to work seven days a week," he noted… indeed they do, Senator.

 

As we noted yesterday, Bloomberg Businessweek reports,

Whoa! Hold on there, young banker. Your boss wants you to kick back and stop working so hard. That’s the message from Bank of America (BAC), which just issued a memo advising its analysts and associates to “take a minimum of four weekend days off per month.” (Senior executives presumably know what’s good for them and are welcome to work around the clock.)

This growing pressure to relax isn’t limited to junior bankers trying to get ahead.

In fact, a major source of stress on U.S. workers right now is the onslaught of data about the costs of being so stressed and sleep-deprived. You’re more likely to crash your car, drink too much, blow up in a meeting, divorce your spouse, and fall prey to everything from a cold to a heart attack. Just being around a stressed person, so-called secondhand stress, can leave you feeling more stressed. For most Americans the main source of that stress isn’t their finances or love life or lousy neighbors. It’s their job. More specifically, it’s the workload from their job. (The kind of workload that prompts a Bank of America associate to, say, work over the weekend.)

The challenge is finding a fix that fits into a busy schedule.

While TV pundits love to cast the U.S. as a nation of takers, the reality is that Americans are a pretty hardworking bunch.

 

They used only 10 of their 14 vacation days in 2013, according to Expedia (EXPE). French workers, in contrast, received and used all 30 of their days. (To be sure, Americans beat the French when it comes to actually working, with unemployment standing at 6.7 percent and 10.9 percent, respectively.) Americans eat at their desks, work longer days, and retire later than their counterparts in most other parts of the world, which might also explain why they’re acknowledged masters in the realm of fast food.

 

Thus comes the slew of U.S.-style solutions, from Oprah Winfrey’s online Meditation Master Trilogy to Arianna Huffington’s Third Metric tour aimed at tackling “sleep deprivation, burnout, and driving ourselves into the ground.” Harvard Business Review has a guide that promises to “harness stress so that it spurs your productivity.”

 

There are meditation goggles, Ambien, sleep coaches, and power naps. And, of course, there are the missives that remind eager workers to go home on weekends.

That may prove to be a hard sell if you belong to a generation that’s experienced 70 consecutive months of double-digit underemployment, often with daunting college debts to pay off.

If so, take comfort in knowing elders like State Senator Glenn Grothman (R-Wis.) are on your side.

 

He’s fighting to undo a “goofy” state law that requires employers to give workers a day off, saying “all sorts of people want to work seven days a week.” You betcha

 

If the boss insists you take more weekends off without reducing the workload that got you there in the first place, that’s OK.

 

The minute you hit midnight on Sunday, the workweek can start again.

So the message is clear: Relax! or you're fired… and make sure all your work is done… or you're fired!

via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/D4_xhxfoZrQ/story01.htm Tyler Durden

Did Goldman Just Kill The Music? – “The S&P500 Is Now Overvalued By Almost Any Measure”

“As long as the music is playing, you’ve got to get up and dance…. We’re still dancing.”

      – Chuck Prince, July 2007

Late last night the music may have just skipped a major beat, after Goldman released a Friday evening note that is perhaps the most bearish thing to come from Goldman’s chief strategist David Kostin in over a year (who incidentally just repeated what we said most recently a week ago in “Stocks Are More Expensive Now Than At Their 2007 Peak“). To wit:

S&P 500 valuation is lofty by almost any measure, both for the aggregate market (15.9x) as well as the median stock (16.8x). We believe S&P 500 trades close to fair value and the forward path will depend on profit growth rather than P/E expansion. However, many clients argue that the P/E multiple will continue to rise in 2014 with 17x or 18x often cited, with some investors arguing for 20x. We explore valuation using various approaches. We conclude that further P/E expansion will be difficult to achieve. Of course, it is possible. It is just not probable based on history. 

 

The current valuation of the S&P 500 is lofty by almost any measure, both for the aggregate market as well as the median stock: (1) The P/E ratio; (2) the current P/E expansion cycle; (3) EV/Sales; (4) EV/EBITDA; (5) Free Cash Flow yield; (6) Price/Book as well as the ROE and P/B relationship; and compared with the levels of (6) inflation; (7) nominal 10-year Treasury yields; and (8) real interest rates. Furthermore, the cyclically-adjusted P/E ratio suggests the S&P 500 is currently 30% overvalued in terms of (9) Operating EPS and (10) about 45% overvalued using As Reported earnings.

Cue David Tepper to bring out even bigger greater fools who do believe in his 20x PE multiple “thesis.” Cause if 20x works, why not 40x, or 60x, or moar?

* * *

Kostin’s full “market is now overvalued” note:

We believe S&P 500 currently trades close to fair value and the forward path of the market will depend on the trajectory of profits rather than further expansion of the forward P/E multiple from the current 15.9x. We forecast a modest price gain of roughly 3% to our year-end 2014 target of 1900. We expect S&P 500 will climb to 2100 by the end of 2015 and reach 2200 by the end of 2016 representing a gain of 20% over the next three years.

However, many clients argue that the multiple will continue to expand in 2014 leading to another year of strong US equity returns. A forward multiple of 17x or 18x is often cited, with others suggesting 20x is reasonable given the strengthening US economy and low interest rates. Many on the buy-side have year-end 2014 targets between 2000 and 2200 reflecting a price gain of 9% to 20%, well above our more modest projection.

The current valuation of the S&P 500 is lofty by almost any measure, both for the aggregate market as well as the median stock: (1) The P/E ratio; (2) the current P/E expansion cycle; (3) EV/Sales; (4) EV/EBITDA; (5) Free Cash Flow yield; (6) Price/Book as well as the ROE and P/B relationship; and compared with the levels of (6) inflation; (7) nominal 10-year Treasury yields; and (8) real interest rates. Furthermore, the cyclically-adjusted P/E ratio suggests the S&P 500 is currently 30% overvalued in terms of (9) Operating EPS and (10) about 45% overvalued using As Reported earnings.

Reflecting on our recent client visits and conversations, the biggest surprise is how many investors expect the forward P/E multiple to expand to 17x or 18x. For some reason, many market participants believe the P/E multiple has a long-term average of 15x and therefore expansion to 17-18x seems reasonable. But the common perception is wrong. The forward P/E ratio for the S&P 500 during the past 5-year, 10-year, and 35-year periods has averaged 13.2x, 14.1x, and 13.0x, respectively. At 15.9x, the current aggregate forward P/E multiple is high by historical standards.

Most investors are surprised to learn that since 1976 the S&P 500 P/E multiple has only exceeded 17x during the 1997-2000 Tech Bubble and a brief four-month period in 2003-04 (see Exhibit 1). Other than those two episodes, the US stock market has never traded at a P/E of 17x or above.

A graph of the historical distribution of P/E ratios clearly highlights that outside of the Tech Bubble, the market has only rarely (5% of the time) traded at the current forward multiple of 16x (see Exhibit 2).

The elevated market multiple is even more apparent when viewed on a median basis. At 16.8x, the current multiple is at the high end of its historical distribution (see Exhibit 3).

The multiple expansion cycle provides another lens through which we view equity valuation. There have been nine multiple expansion cycles during the past 30 years. The P/E troughed at a median value of 10.5x and peaked at a median value of 15.0x, an increase of roughly 50%. The current expansion cycle began in September 2011 when the market traded at 10.6x forward EPS and it currently trades at 15.9x, an expansion of 50%. However, during most (7 of the 9) of the cycles the backdrop included falling bond yields and declining inflation. In contrast, bond yields are now increasing and inflation is low but expected to rise.
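As a quick check on those ratios (a sketch; the trough and peak multiples are the note's own figures; note that the historical median move from 10.5x to 15.0x is strictly closer to a 43% expansion, while the current cycle's 10.6x to 15.9x is indeed about 50%):

```python
# Quick check of the expansion-cycle figures quoted above.
hist_trough, hist_peak = 10.5, 15.0   # median trough/peak of past cycles
cur_trough, cur_peak = 10.6, 15.9     # September 2011 trough vs. today
print("historical median expansion: %.0f%%" % (100 * (hist_peak / hist_trough - 1)))  # ~43%
print("current cycle expansion:     %.0f%%" % (100 * (cur_peak / cur_trough - 1)))    # ~50%
```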

We addressed equity valuation using the Fed model and interest rate sensitivity in our December 6th US Weekly Kickstart. Simply put, the earnings yield gap between the S&P 500 and ten-year Treasury yields currently equals about 325 bp. Goldman Sachs Economics forecasts bond yields will creep higher to 3.25% by year-end 2014, a rise of just 25 bp. If the earnings yield gap remains unchanged, then the ‘fair value’ multiple according to the Fed model would be 15.2x at year-end 2014. The implied index level would be 1900 assuming our 2015 EPS forecast of $125. However, bond yields could rise by more than we expect and hit 3.75% while the yield gap could narrow to perhaps 275 bp. The resulting EPS yield of 6.5% represents a forward P/E of 15.4x implying a S&P 500 level of 1923. Exhibit 4 of the Dec 6th Kickstart shows valuation using various yields and yield gaps.
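The Fed-model arithmetic in that paragraph reduces to three lines: earnings yield = Treasury yield + yield gap, fair-value P/E = 1 / earnings yield, implied index level = P/E × forward EPS. A sketch with the note's inputs follows; the round 325 bp gap reproduces Goldman's second scenario (15.4x, 1923) exactly, while their 15.2x / 1900 figure for the first scenario implies a slightly wider gap than the round "about 325 bp".

```python
# Fed-model arithmetic from the note above: earnings yield is the bond
# yield plus the yield gap; the fair-value P/E is its reciprocal; the
# implied index level is that multiple times forward EPS.
def fed_model(bond_yield, yield_gap, forward_eps):
    earnings_yield = bond_yield + yield_gap
    pe = 1.0 / earnings_yield
    return pe, pe * forward_eps

EPS_2015 = 125.0  # Goldman's 2015 EPS forecast, per the note

# Scenario 1: yields creep to 3.25%, gap unchanged at ~325 bp
print("%.1fx -> S&P %.0f" % fed_model(0.0325, 0.0325, EPS_2015))
# Scenario 2: yields hit 3.75%, gap narrows to 275 bp -> 15.4x, ~1923
print("%.1fx -> S&P %.0f" % fed_model(0.0375, 0.0275, EPS_2015))
```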

Incorporating inflation into our valuation analysis suggests S&P 500 is slightly overvalued. When real interest rates have been in the 1%-2% band, the P/E has averaged 15.0x. Nominal rates of 3%-4% have been associated with P/E multiples averaging 14.2x, nearly two points below today. As noted earlier, S&P 500 is overvalued on both an aggregate and median basis on many classic metrics, including EV/EBITDA, FCF, and P/B (see Exhibits 5-8).

 

* * *

Then again, this is Goldman, where dodecatuple reverse psychology in recos is the norm. If Goldman has just gone bearish, it would logically suggest it is very short and is hoping for a crash. But it could also mean it is hoping its clients panic and dump, so that collapsing trade volumes finally soar and Goldman makes at least some money on commissions; or that it is waiting for a plunge in stocks so it can put its massive cash hoard to use; or that it is simply planting the seeds of the next Lehman-like event (now over 5 years later), which would serve as the periodic reset of what once used to be a business cycle. We are sure to find out soon, because whatever happens, there will be volatility.

via Zero Hedge http://feedproxy.google.com/~r/zerohedge/feed/~3/Im6w4YMlQ2Y/story01.htm Tyler Durden
