The Case: On a rainy evening in Bengaluru, an Indian startup called NavDrive was testing a self-driving shuttle on the Outer Ring Road with a safety driver in the front seat. The vehicle combined cameras and other sensors, detailed maps, and software sourced from different companies. As it neared a poorly lit zebra crossing, a person pushing a bicycle began to cross the road. The perception software kept changing its classification of what it was seeing, and there was no rule requiring it to slow down when uncertain. The shuttle, moving at about 38–40 km/h, assumed the bicycle would travel along the road, not across it. The safety driver glanced down at a tablet alert; the brakes came too late, and the crash was fatal. Police and ambulances arrived, the company preserved all on-board records, issued a short public note, and suspended all shuttles. A later review found that the software's training data had too few examples of a person pushing a cycle, that dusk and rain degraded perception further, that there was no rule to slow down under uncertainty, that the tablet distracted the driver, and that a recent software update had gaps in testing and approval.
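To see what the missing safeguard could look like in practice, here is a minimal sketch of an uncertainty-gated speed rule. Every name and threshold below is an illustrative assumption, not a detail of NavDrive's actual software.

```python
# A minimal sketch of the "slow down when unsure" rule the review found
# missing. All names and thresholds here are illustrative assumptions.

def target_speed_kmh(detection_confidence: float,
                     classification_stable: bool,
                     cruise_kmh: float = 40.0) -> float:
    """Pick a speed cap based on how sure the perception system is."""
    if detection_confidence < 0.5 or not classification_stable:
        return min(cruise_kmh, 10.0)   # very unsure: crawl
    if detection_confidence < 0.8:
        return min(cruise_kmh, 25.0)   # somewhat unsure: moderate caution
    return cruise_kmh                  # confident: hold cruise speed

# A flickering label at 0.6 confidence, as in the case above:
print(target_speed_kmh(0.6, classification_stable=False))  # -> 10.0
```

The point of such a rule is that a flip-flopping classification alone, whatever the object turns out to be, is enough to trigger caution.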

Q1. Should high-risk AI trials (like self-driving shuttles) face strict product liability in India?

Arguments for

  • Right to life first: Victims deserve quick justice; they should not have to prove complex technical faults.

  • Safety pressure: Companies will spend more on safety and testing if they know they’re clearly responsible.

  • Faster relief: Strict rules can mean quicker compensation through insurance funds.

  • Fairness: It offsets the information imbalance, since the company alone controls the data and know-how needed to prove what went wrong.

Arguments against

  • Innovation may slow: Startups may be deterred by potentially unlimited liability.

  • Complacency risk: If pay-outs are automatic regardless of fault, some firms might cut corners unless penalties also apply.

  • Higher costs: Insurance and compliance costs may raise prices or squeeze out small firms.

Balanced view

Apply strict liability to such high-risk trials, plus: (1) a mandatory safety file checked by an independent auditor, (2) a no-fault compensation fund financed by operators/insurers, and (3) tough penalties for repeat safety failures.

Q2. Should India require a human safety driver during early self-driving trials?

Arguments for

  • Extra safety net: A human can step in if things go wrong.

  • Public trust: People feel safer seeing a person in charge.

  • Bridge period: Gives time for the tech to improve.

Arguments against

  • Sleepy oversight: Humans get distracted when the computer drives most of the time.

  • Blame game: Easy to blame the driver and ignore deeper system faults.

  • Slows true safety design: Companies may delay building cars that automatically slow when unsure.

Balanced view

Keep a human for now, but: use driver-attention cameras, keep the tablet and screens as quiet as possible, limit trials to approved routes and times, and plan to remove the driver later only after strong safety proof.
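As a rough illustration of what a driver-attention check involves, here is a hedged sketch. The gaze feed, the thresholds, and the escalation steps are all assumptions; production driver-monitoring systems are considerably more sophisticated.

```python
import time

# Illustrative sketch of a driver-attention watchdog. The eyes_on_road
# feed, the thresholds, and the escalation steps are hypothetical.

GAZE_AWAY_WARN_S = 2.0       # assumed tolerance before an alert
GAZE_AWAY_ESCALATE_S = 4.0   # assumed tolerance before a safe stop

def monitor_attention(eyes_on_road, warn, begin_safe_stop):
    """Poll a gaze detector; alert, then escalate, if the driver looks away."""
    away_since = None
    while True:
        if eyes_on_road():
            away_since = None            # driver is watching the road again
        else:
            away_since = away_since or time.monotonic()
            away_for = time.monotonic() - away_since
            if away_for >= GAZE_AWAY_ESCALATE_S:
                begin_safe_stop()        # hand over to a minimal-risk manoeuvre
                return
            if away_for >= GAZE_AWAY_WARN_S:
                warn()                   # audible/haptic alert to the driver
        time.sleep(0.1)                  # assumed 10 Hz polling
```

Note the design choice: the system does not merely log inattention, it escalates to a safe stop, so the "blame game" cannot fall entirely on the human.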

Q3. Should black-box recorders and clear explanations be mandatory in such trials?

Arguments for

  • Answerability: We must know why the car did what it did (what it “saw,” how sure it was, why it braked or didn’t).

  • Fair decisions: Courts need secure, time-stamped records to judge cases.

  • Learning: Open summaries after crashes help everyone improve.

Arguments against

  • Privacy: Video may capture faces and number plates; needs careful handling.

  • Trade secrets: Firms worry about revealing core methods.

  • Costs: Safe storage and access control add effort and money.

Balanced view

Make standard black-box logging compulsory with privacy safeguards (mask faces and number plates, encrypt data, limit access), keep a copy with an independent trusted body, and publish plain-English summaries after incidents. Protect real trade secrets, but give full data to courts and regulators.
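To make the proposal concrete, here is a minimal sketch of a privacy-aware, tamper-evident event log. The field names, the masking step, and the hash-chain design are illustrative assumptions; real event data recorders follow detailed regulatory standards.

```python
import hashlib, json, time

# Minimal sketch of a privacy-aware, tamper-evident event log.
# Field names and the masking step are illustrative assumptions.

def masked(frame_ref: str) -> str:
    """Stand-in for face and number-plate blurring before storage."""
    return f"{frame_ref}#faces-and-plates-masked"

def append_event(log: list, event: dict) -> None:
    """Append an event, chaining each record to the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    record = {"time": time.time(), "event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)

log: list = []
append_event(log, {"type": "detection",
                   "label": "pedestrian-with-bicycle",
                   "confidence": 0.6,
                   "frame": masked("cam-front/000123")})
append_event(log, {"type": "brake_command", "deceleration_mps2": 4.5})
# Any later edit to an earlier record breaks the hash chain.
```

Chaining each record to the hash of the one before it makes later tampering detectable, which is exactly the property courts and regulators need from such evidence.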

Q4. When should criminal charges apply to company officers after a fatal crash?

Arguments for

  • If they knew and ignored: Where leaders hid safety problems, disabled safeguards, or lied in audits, criminal action is fair.

  • Deterrence: Reminds top management that safety is not optional.

  • Justice: Shows serious failure is not just a business expense.

Arguments against

  • Too harsh for honest mistakes: Complex systems can fail without bad intent.

  • Fear factor: May scare away responsible trials.

  • Many hands problem: Often many actors share blame; targeting one person may be unfair.

Balanced view

Use criminal law only when there’s clear proof of wilful or reckless behaviour (e.g., hiding known defects, faking tests, running outside approved routes/times, ignoring regulator orders). Otherwise, rely on heavy fines, trial suspensions, and mandatory fixes.

Bottom line (one minute summary)

To make AI on Indian roads safe and trusted:

  1. Clear responsibility with strict liability and quick compensation.

  2. Human safety driver for now, but with strong attention checks and a plan to phase out only after solid evidence.

  3. Black-box and explainability by law, with privacy protection.

  4. Criminal action only for wilful safety abuses; otherwise fines and fixes.

This mix protects people first, keeps innovation alive, and builds public confidence.
