Seoul bans sale of 80 Volkswagen models
The South Korean government on Tuesday revoked the certification of 80 Volkswagen models and banned their sale in the country over the emissions-cheating scandal implicating the German car manufacturer.
 
The ban affects 32 different vehicle types (18 diesel and 14 petrol) from Volkswagen and Audi, which account for most of the two brands' supply in South Korea, Seoul's Ministry of Environment said in a statement.
 
The Ministry also fined Volkswagen $16 million over 47 models that cheated on environmental tests in South Korea, EFE news reported.
 
Volkswagen, which had anticipated the decision, stopped selling the affected vehicles in South Korea last week, including the Volkswagen Golf, Jetta and Tiguan models, along with the Audi A3 and A6.
 
Seoul ordered the recall of some 5,800 Audi A4 and A5 vehicles, in some of their TDI versions, marketed in the country since 2014.
 
The certification cancellation affects some 209,000 vehicles, 68 percent of those marketed by Volkswagen since 2007. It will not affect existing owners, who can continue driving their vehicles and may even sell them on eventually.
 
The ban comes after an investigation into allegations that Volkswagen obtained vehicle approvals from South Korean authorities by reporting false results for noise levels, fuel efficiency and emissions.
 
Prosecutors had already raided Volkswagen's offices in February over insufficient data submitted for the recall ordered after the manipulation was discovered, and in mid-July they charged an executive of the brand's Seoul subsidiary with allegedly tampering with data and violating the air quality law.
 
The case dates back to 2015, when it was discovered that Volkswagen had used fraudulent software to falsify vehicle emissions results in several countries, including South Korea.
 
The South Korean government fined the German manufacturer $13 million in November 2015 and ordered a recall of 125,000 vehicles in the country.
 
Disclaimer: Information, facts or opinions expressed in this news article are presented as sourced from IANS and do not reflect views of Moneylife and hence Moneylife is not responsible or liable for the same. As a source and news provider, IANS is responsible for accuracy, completeness, suitability and validity of any information in this article.


Power without Accountability: Repeated Raps from Higher Judiciary are Dangerous Portents for SEBI
It is good that the capital market watchdog had the sagacity to defuse, with an unconditional apology, an ugly situation with dangerous portents. It was also mature of the Securities Appellate Tribunal (SAT) to accept the apology on 26th July and close the matter in order to protect the regulator's standing with market participants. But is it okay for the Securities & Exchange Board of India (SEBI) to be repeatedly caught making irresponsible use of its regulatory powers, only to backtrack when it is rapped hard by a high court or an appellate tribunal? Here is what happened this time.
 
In May 2016, SAT had asked SEBI to give a hearing to Adventz Finance and dispose of the matter within seven weeks, or by 24 June 2016. Since then, SAT granted SEBI at least three opportunities to produce the order passed by its whole-time member (WTM). After much subterfuge, including an attempt to pass off a letter from a chief general manager as an order, SEBI finally admitted that no formal order had been passed. Even then, SAT first asked the member to pass an order immediately, only to be informed that he was travelling. A livid SAT ordered SEBI to pay a fine of Rs1 lakh to the appellant for giving him the 'run around' and directed a different WTM to hear and decide the Adventz Finance matter. It also expressed distress at the manner in which SEBI's WTM had 'discharged his quasi-judicial' duties and asked that its ruling be forwarded to the finance minister and the SEBI chairman. After SEBI tendered an unconditional apology, SAT relented, withdrew the fine and permitted the WTM, Rajeev Kumar Agarwal (not named in the order), to pass a proper order within a week. This is not the first time that SEBI's conduct as a quasi-judicial body has been questioned. Consider two examples.
 
1. In March 2015, the Bombay High Court set aside SEBI's first-ever exercise of its newly acquired power of arrest, in the case of Vinod Hingorani. The Court said that SEBI had exercised "the power of arrest in total contravention of the provisions" and that its order was "arbitrary, illegal and void." In fact, the Court called it an 'abuse of power'.
 
2. In April this year, SAT had rapped SEBI for passing contradictory and inconsistent orders and penalties for similar offences. It was especially disturbed by a SEBI lawyer’s stand that the regulator stood by ‘both orders’. While SAT castigated SEBI’s conduct as ‘disgraceful’, the problem lies at a level much higher than that of the lawyer who was probably following instructions from the top. 
 
Significantly, it is SAT, rather than the regulator, that seems concerned that SEBI's contradictory orders are "detrimental to the interests of the securities market." The need for confidence in the market and its regulation ought to be a national concern. There should be consequences for those who fail to understand the responsibility and gravity of their role as a regulatory body and a quasi-judicial authority. SEBI is now armed with enormous powers (search, seizure, arrest, freezing of bank accounts, barring entities from the market and stopping operations) that can damage businesses and irreparably destroy reputations. Only the very brave or well-funded intermediaries have the courage to challenge the regulator legally and risk vengeful action in the form of interim orders that can shut down a business, with no obligation on SEBI to provide a hearing or issue a time-bound order.
 
Indeed, many are silently crushed by their reluctance to challenge the regulator. SEBI's misguided policies, cumbersome procedures and constant tinkering with the rules are only making people fearful, confused and frustrated. The capital market thrives on information, and savvy investors slowly learn to separate the grain from the chaff of insiders and speculators. But SEBI's rules often end up gagging open discussion in public forums, even while it is unable to monitor or check shady tipsters who use social media and SMS with impunity to entrap gullible and ignorant investors.


COMMENTS

DR MUKESH

4 months ago

SEBI members should be made to pay for their inaction, lethargy and arrogance! They think they are above the law, which will soon catch up with them. The judiciary is watching! Beware, guys!

Vaibhav Dhoka

4 months ago

Leniency is repeatedly shown to officials, who grow arrogant at public cost. I would like to mention my correspondence with Mr Habibullah, then the CIC, over a second appeal against an RTI response from SEBI. The case was posted at New Delhi and, a day before the hearing, SEBI asked for an adjournment. I wrote to the CIC that officials attend at the state's cost and do not mind an eleventh-hour adjournment, whereas a common man cannot travel to New Delhi due to the cost. He fully agreed with the facts but said he was unable to help because of procedures. As procedures are prepared by babus, they take undue advantage, and the judiciary and other authorities show leniency as they are of the same breed.

siva sankaran

4 months ago

Make the members of SEBI pay for their actions and inactions. Only then will they learn, the hard way.


DR MUKESH

In Reply to siva sankaran 4 months ago

Hon'ble PM, please clean up SEBI now. They need to be shaken up!

Making Algorithms Accountable

Algorithms are ubiquitous in our lives. They map out the best route to our destination and help us find new music based on what we listen to now. But they are also being employed to inform fundamental decisions about our lives.

 

Companies use them to sort through stacks of résumés from job seekers. Credit agencies use them to determine our credit scores. And the criminal justice system is increasingly using algorithms to predict a defendant's future criminality.

 

Those computer-generated criminal "risk scores" were at the center of a recent Wisconsin Supreme Court decision that set the first significant limits on the use of risk algorithms in sentencing.

 

The court ruled that while judges could use these risk scores, the scores could not be a "determinative" factor in whether a defendant was jailed or placed on probation. And, most important, the court stipulated that a presentence report submitted to the judge must include a warning about the limits of the algorithm's accuracy.

 

This warning requirement is an important milestone in the debate over how our data-driven society should hold decision-making software accountable. But advocates for big data due process argue that much more must be done to assure the appropriateness and accuracy of algorithm results.

 

An algorithm is a procedure or set of instructions often used by a computer to solve a problem. Many algorithms are secret. In Wisconsin, for instance, the risk-score formula was developed by a private company and has never been publicly disclosed because it is considered proprietary. This secrecy has made it difficult for lawyers to challenge a result.
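To make that concrete: since the Wisconsin formula has never been disclosed, the short Python sketch below is purely illustrative. The inputs, weights and cutoffs are invented for explanation and bear no relation to Northpointe's actual tool; the sketch only shows the general shape such a scoring procedure might take.

def toy_risk_score(age, prior_arrests, employed):
    # Hypothetical weighted checklist returning a 1-10 "risk" score.
    # Every input and weight here is an invented assumption.
    score = 5.0
    score += 1.5 * min(prior_arrests, 3)  # more priors raise the score
    score -= 0.1 * max(age - 25, 0)       # older defendants score lower
    score -= 1.0 if employed else 0.0     # employment lowers the score
    return max(1, min(10, round(score)))

print(toy_risk_score(age=22, prior_arrests=2, employed=False))  # prints 8

The point of the illustration is that every choice in such a procedure, from which inputs to use to how to weight them, is a policy decision; when the formula is secret, those decisions cannot be examined or challenged.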

 

The credit score is the lone algorithm for which consumers have a legal right to examine and challenge the underlying data used to generate it. In 1970, President Richard M. Nixon signed the Fair Credit Reporting Act. It gave people the right to see the data in their credit reports and to challenge and delete data that was inaccurate.

 

For most other algorithms, people are expected to read fine-print privacy policies, in the hopes of determining whether their data might be used against them in a way that they wouldn't expect.

 

"We urgently need more due process with the algorithmic systems influencing our lives," says Kate Crawford, a principal researcher at Microsoft Research who has called for big data due process requirements. "If you are given a score that jeopardizes your ability to get a job, housing or education, you should have the right to see that data, know how it was generated, and be able to correct errors and contest the decision."

 

The European Union has recently adopted a due process requirement for data-driven decisions based "solely on automated processing" that "significantly affect" citizens. The new rules, which are set to go into effect in May 2018, give European Union citizens the right to obtain an explanation of automated decisions and to challenge those decisions.

 

However, since the European regulations apply only to situations that don't involve human judgment, "such as automatic refusal of an online credit application or e-recruiting practices without any human intervention," they are likely to affect a narrow class of automated decisions.

 

In 2012, the Obama administration proposed a "consumer privacy bill of rights" — modeled on European data protection principles — that would have allowed consumers to access and correct some data that was used to make judgments about them. But the measure died in Congress.

 

More recently, the White House has suggested that algorithm makers police themselves. In a recent report, the administration called for automated decision-making tools to be tested for fairness, and for the development of "algorithmic auditing."

 

But algorithmic auditing is not yet common. In 2014, Eric H. Holder Jr., then the attorney general, called for the United States Sentencing Commission to study whether risk assessments used in sentencing were reinforcing unjust disparities in the criminal justice system. No study was done.

 

Even Wisconsin, which has been using risk assessment scores in sentencing for four years, has not independently tested whether the tool works or whether it is biased against certain groups.

 

At ProPublica, we obtained more than 7,000 risk scores assigned by the company Northpointe, whose tool is used in Wisconsin, and compared predicted recidivism to actual recidivism. We found the scores were wrong 40 percent of the time and were biased against black defendants, who were falsely labeled future criminals at almost twice the rate of white defendants. (Northpointe disputed our analysis. Read our response.)
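To illustrate the kind of comparison described above, here is a minimal Python sketch. The records are invented stand-ins, not ProPublica's data, and the real analysis involved thousands of scores and careful methodology; the sketch only shows how an overall error rate and per-group false positive rates might be tabulated from predicted and actual recidivism.

# Each record: (group, predicted_high_risk, actually_reoffended).
# These four rows are fabricated purely for illustration.
records = [
    ("black", True, False),
    ("black", True, True),
    ("white", False, False),
    ("white", True, True),
]

# Overall error rate: how often the prediction disagreed with the outcome.
errors = sum(pred != actual for _, pred, actual in records)
print("error rate:", errors / len(records))

# False positive rate per group: among people who did not reoffend,
# the share who were nonetheless labelled high risk.
for group in ("black", "white"):
    non_reoffenders = [pred for g, pred, actual in records
                       if g == group and not actual]
    if non_reoffenders:
        print(group, "false positive rate:",
              sum(non_reoffenders) / len(non_reoffenders))

A disparity in false positive rates, rather than raw accuracy alone, is what the finding of bias above refers to: defendants who never reoffended being wrongly labelled future criminals at different rates depending on their group.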

 

Some have argued that these failure rates are still better than the human biases of individual judges, although there is no data on judges with which to compare. But even if that were the case, are we willing to accept an algorithm with such a high failure rate for black defendants?

 

Warning labels are not a bad start toward answering that question. Judges may be cautious of risk scores that are accompanied by a statement that the score has been found to overpredict recidivism among black defendants. Yet as we rapidly enter the era of automated decision making, we should demand more than warning labels.

 

A better goal would be to try to at least meet, if not exceed, the accountability standard set by a president not otherwise known for his commitment to transparency, Richard Nixon: the right to examine and challenge the data used to make algorithmic decisions about us.

 

ProPublica is a Pulitzer Prize-winning investigative newsroom.

 
