Top Store-Brand Herbal Supplements Missing the Herb

US State officials demand Walmart, Target, Walgreens and GNC stop selling the supplements

 

If you buy store-brand herbal supplements from Walmart, Target, Walgreens or GNC, you may not be paying for the herb advertised on the label. Worse yet, the bottle you take home may contain hidden and potentially harmful allergens.
 
An investigation into the retailers’ labeling of such store-brand supplements as ginkgo biloba, ginseng and St. John’s wort has found that only 21 percent of the products tested contained DNA from the herb on the label and that many contained undeclared ingredients such as asparagus, rice, primrose and wheat.
 
The New York State attorney general’s office spearheaded the investigation and has sent cease and desist letters to the four national retailers demanding that they immediately stop selling several of their dietary supplements.
 
“This investigation makes one thing abundantly clear: the old adage ‘buyer beware’ may be especially true for consumers of herbal supplements,” said New York Attorney General Eric Schneiderman. “The DNA test results seem to confirm long-standing questions about the herbal industry. Mislabeling, contamination, and false advertising are illegal.”
 
Six supplements were tested from each of the four retailers. In total, seven different products were placed under the microscope: ginkgo biloba, ginseng, St. John’s wort, garlic, echinacea, saw palmetto and valerian root. Of the four retailers, Walmart had the worst showing, with only 4 percent of its supplements containing DNA from the plants listed on their labels.
 
In separate statements to TINA.org, Walmart, Walgreens and Target each said they intend to comply with their respective cease and desist letter and are in the process of removing the supplements from their shelves. GNC did not return a request for comment on Tuesday.
 
Dietary supplements are not held to the same pre-market regulatory standards as FDA-approved drugs. But that hasn’t stopped the dietary supplement industry from making billions off consumers. For more of our coverage on the industry, click here. 
 
 


Uber Claims Credit for Drop in Drunk Driving Accidents. But Where's the Evidence?

The ridesharing service published a report last week with Mothers Against Drunk Driving connecting the rise of Uber to a drop in drunk driving accidents. Except the connection isn't so clear

 

Last week Uber revealed another way the ridesharing service is revolutionizing travel: Cities that use Uber see a reduction in drunk driving accidents among young people, a company report showed. 
 
"When empowered with more transportation options like Uber, people are making better choices that save lives," the company declared.
 
David Plouffe – President Obama's former campaign manager who is now filling the same role for Uber – emailed millions of users to share the astounding news. "Since we launched uberX in California, drunk-driving crashes decreased by 60 per month for drivers under 30," Plouffe wrote. "That's 1,800 crashes likely prevented over the past 2½ years."
 
What is Uber's evidence that they "likely prevented" so many crashes?
 
Not much. 
 
Indeed, Mothers Against Drunk Driving, which co-authored the report, cautioned us against connecting the rise of Uber to a drop in drunk driving. "Nobody is saying that there is a causation relationship here, this is a correlation relationship. Purely correlational," said Amy George, senior vice president of marketing and communications for MADD. (MADD took a less cautious stance in a press release last week: "New Report from MADD, Uber Reveals Ridesharing Services Important Innovation to Reduce Drunk Driving.")
 
Uber's report has two key graphics: The first shows alcohol-involved crashes in California markets where Uber operates. The second shows the same, but in cities where there is no
Uber service. Each graph compares accidents between under-30 and 30-and-over drivers. The charts actually show, in general, a downward trend of drunk driving accidents in both Uber and non-Uber markets.
 
But Uber and Plouffe are hanging their assertion on another facet of the analysis: drunk driving crashes for those under 30 have dropped more in cities that have Uber versus those that don't.
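The comparison Uber is drawing here is, in effect, a difference-in-differences: take the change in under-30 crashes in Uber markets, subtract the change in non-Uber markets, and attribute the leftover gap to Uber. A minimal sketch of that arithmetic, using hypothetical, illustrative numbers (not Uber's actual data), shows why the gap alone proves little; anything else that changed differently in Uber cities would produce the same number.

```python
# Hypothetical crash counts for illustration only -- not Uber's data.
crashes = {
    ("uber", "before"): 1000, ("uber", "after"): 700,
    ("no_uber", "before"): 1000, ("no_uber", "after"): 850,
}

# Change within each group of cities over the same period.
change_uber = crashes[("uber", "after")] - crashes[("uber", "before")]          # -300
change_no_uber = crashes[("no_uber", "after")] - crashes[("no_uber", "before")]  # -150

# Difference-in-differences: the "extra" drop in Uber markets.
did = change_uber - change_no_uber
print(did)  # -150: the gap Uber's argument attributes to its presence
```

Both groups fell, as Uber's own charts show; the claim rests entirely on the extra -150, which a confounder could explain just as well.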
 
[Chart: California: Alcohol-Related Crashes in Markets Where Uber Operates]

[Chart: California: Alcohol-Related Crashes in Markets Where Uber Does Not Operate]
"We believe there is a direct relationship between the presence of uberX (Uber's lowest-cost option) in a city and the amount of drunk driving crashes involving younger populations," the report says.
 
That could be. But we don't really know, and neither does Uber.
 
Uber does not provide evidence in its report that Uber users and those under 30 are the same population. A methodology shared with us by Uber asserts that their users are generally younger and more technologically savvy. MADD's George said they sent the data analysis to an outside research group for extra vetting. She declined to name the group because they were not formally part of the report.
 
Michael Amodeo, an Uber spokesperson, sent us a statement in response to questions about the analysis:
 
"We believe the results of the study are an encouraging step in the right direction and provide evidence that ridesharing services like Uber are making a meaningful and positive impact on mindsets and the rate of drunk driving. We attempt to deal with other factors in our study by breaking out the under 30 and over 30 groups, and we're comparing them against each other."
 
Uber's report credits an analysis by Nate Good, who is chief technology officer for an online ticketing company as well as an amateur statistician and self-described ridesharing proponent. Uber's report reads: "Inspired by Nate Good's analysis—which demonstrated a clear downward trend in alcohol-related crashes in Pennsylvania's youngest cohort once ridesharing was available—we decided to replicate that study in California at large using data procured from the State."
 
However, Good's study had nothing to do with "alcohol-related crashes." Good analyzed DUI arrests. "That was a poor choice of words on Uber's part," Good told us.
 
Good was careful to note various caveats of his analysis. No. 1 on his list: "Correlation does not equate to causation." No. 2: "I am a computer science professional and a data science enthusiast, but by no means a statistician."
 
Good said he attempted to analyze alcohol-involved crash data but could not find a reliable data source.
 
We've also reached out to Plouffe, but haven't heard back yet.
 
Courtesy: ProPublica.org
 


MLF Seminar: Can Opinion Polls Predict Election Outcomes?
“Opinion polls can be fairly accurate because robust statistical methodologies support them,” said leading mathematician Dr Rajeeva L Karandikar
 
Can a sample size of, say, 20,000 voters be sufficient to predict the outcome in a country with over 700 million voters? This is what Dr Rajeeva L Karandikar, director, Chennai Mathematical Institute, who has nearly 20 years of experience in statistical study of elections and trends in voting, explained at a Moneylife Foundation event sponsored by BARC India. This was followed by a highly interactive discussion with Paritosh Joshi, member of the technical committee for the Broadcast Audience Research Council India and the Media Research Users’ Council.
 
Simple math and statistics, lots of common sense and a good understanding of the ground reality can yield very good forecasts about the outcome of elections based on opinion polls and exit polls. These ingredients, along with domain knowledge, are what go into psephology, explained Dr Karandikar, who has a success rate of 85% in his predictions. He repeatedly emphasised that, if ethically done by following robust statistical methodology, opinion polls can be fairly accurate. But since the results are based on probability, there is always the chance that they could, in some situations, be off the mark.
 
“The media hypes up poll projections as the truth, the whole truth and nothing but the truth,” he said. “But, in reality,” he continued, “polls should be seen as an indication of who is likely to win; will anyone get majority and so on. And it also gives a deeper insight into why people voted the way they did.”
 
Dr Karandikar started his discussion by talking about the scientific basis of opinion polls, their power as well as their limitations. The central challenge of an opinion poll is drawing a truly representative sample in a country with a voting population as large as India’s. He demonstrated an experiment to the audience using probability theory, showing that opinion polls rest on a basic probability calculation: estimating the likelihood of voters’ preferences, from which the winner of an election can be predicted.
 
A sample needs to be of the right size, but that size does not have to grow with the total population of voters: a sample of 4,000 works equally well whether a constituency has 100,000 voters or 2 million. Sampling, if properly done, has the power of determining the winner with 99% probability, said Dr Karandikar. Random sampling is a must to remove any bias; failure to select a random sample can lead to wrong conclusions.
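The statistical point behind this claim can be sketched in a few lines. This is an illustrative simulation under assumed numbers (a hypothetical 52–48 race), not Dr Karandikar's actual methodology: the margin of error of a simple random sample depends on the sample size, not the population size, so a 4,000-voter sample performs the same in a small constituency as in a huge one.

```python
import math
import random

def margin_of_error(n, p=0.5, z=1.96):
    # Half-width of a 95% confidence interval for a vote share estimated
    # from a simple random sample of n voters. Note that the population
    # size does not appear anywhere in the formula.
    return z * math.sqrt(p * (1 - p) / n)

print(round(margin_of_error(4000), 4))  # 0.0155 -> about +/-1.6 points

def winner_called_correctly(pop_size, true_share=0.52, n=4000, trials=500, seed=0):
    # Simulate drawing n voters without replacement from a population in
    # which a fraction true_share supports the eventual winner, and count
    # how often the sample majority matches the true majority.
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        remaining_yes = int(pop_size * true_share)
        remaining = pop_size
        yes = 0
        for _ in range(n):
            if rng.random() < remaining_yes / remaining:
                yes += 1
                remaining_yes -= 1
            remaining -= 1
        if yes / n > 0.5:
            correct += 1
    return correct / trials

# A 4,000-voter sample calls a hypothetical 52-48 race correctly about
# 99% of the time, whether the constituency has 100,000 voters or 2 million.
print(winner_called_correctly(100_000))
print(winner_called_correctly(2_000_000))
```

The two simulated success rates come out essentially identical, which is the point of the seminar's claim that the right sample size is independent of the electorate's size.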
 
Dr Karandikar’s detailed presentation on opinion polls was followed by a very lively discussion led by Mr Joshi. In the course of the discussion, when asked about the sufficiency of the sample size and the data collected, Dr Karandikar pointed out the limitations posed by the resources needed to conduct large sample surveys in a country of India’s size. There are costs involved, and increasing the sample size would be prohibitively expensive. There is also the problem of finding trained and reliable manpower to conduct the surveys. Hence, pollsters try to work with the smallest sample size that still gives an effective result.
 
He also mentioned that pre-election polls have lower predictive power: opinion is volatile, not all respondents actually vote and some may hide the truth. “Exit polls were devised to correct these effects: the gap between the opinion poll and date of voting and also the fact that only between 50% and 70% voters actually vote,” he explained. In India, “leaders change parties and parties change alliances, leading to instability in voter preferences,” he pointed out. Referring to previous surveys, he said that voters change their preferences across dates, and staggered polling dates can also affect voter preferences.
 
Anand Halve, brand strategist and co-founder of chlorophyll, asked whether people respond accurately to sensitive questions. Dr Karandikar explained how questionnaires are prepared, and the statistical techniques used to handle issues such as responses to sensitive questions.
 
Dr Karandikar explained in detail how random samples are picked using the master data of electoral rolls. In cases where data cannot be collected from some selected respondents, the achieved sample ends up smaller than planned. Further, Mr Joshi wondered about the ethical use of opinion polls and how they can affect the actual election outcome. Several questions have been raised about the integrity of opinion polls; last year, in a sting operation, a number of opinion polling agencies approached by undercover reporters agreed to manipulate poll data. Dr Karandikar agreed that opinion polls can be manipulated and therefore emphasised the need for an audit system to verify and authenticate the polls conducted.
