

by Semmel Weis

First published 18Sep2017    Last edited 29Sep2017  11:45pm

An Example Of A Real Science Experiment Coming Out Of A UCSF Lab

Sherbenou et al (2016) start their article with the acknowledgement that a diagnosis of multiple myeloma amounts to a death sentence:  "Multiple myeloma is incurable by standard approaches because of inevitable relapse and development of treatment resistance in all patients."

Most multiple myeloma patients have been told this, or have read it, and their hope lies in "inevitable" being delayed a good many years, and sometimes rests on the remote possibility that they may find themselves to be exceptions to the rule.

But among the things that myeloma patients should know but are not told is that there are researchers who already do know how to cure myeloma, at least in lab mice, as is demonstrated in two experiments conducted at the University of California San Francisco (UCSF), as reported in the same article that has been quoted just above, abbreviated here as Sherbenou (2016).

We are not going to try to understand the two Sherbenou (2016) experiments in detail — the few characteristics that are relevant to our purposes can be taken in at a glance from Figures 3 and 4 below.

In both experiments, mice were injected with myeloma on Day = -10, which is to say the myeloma was allowed ten days to establish itself before treatment began.  Then came two weeks of treatment of different kinds for each group of mice, and after that simply monitoring the progress of the disease by bioluminescence imaging (BLI), a methodology we have already seen three instances of at ALTERNATIVE, a difference here being that both dorsal and ventral views are shown (instead of only ventral).

In both Sherbenou (2016) experiments, the control group receiving a placebo appears in the left-hand column titled PBS (standing for Phosphate-Buffered Saline, a solution that can be injected without effect because it is isotonic and nontoxic).  In the explanatory text in Figures 3A and 4A, "Tx" stands for "treatment".

Sherbenou (2016)

Antibody-drug conjugate targeting CD46 eliminates multiple myeloma cells

Daniel W. Sherbenou, Blake T. Aftab, Yang Su, Christopher R. Behrens, Arun Wiita, Aaron C. Logan, Diego Acosta-Alvear, Byron C. Hann, Peter Walter, Marc A. Shuman, Xiaobo Wu, John P. Atkinson, Jeffrey L. Wolf, Thomas G. Martin, and Bin Liu,     Journal of Clinical Investigation, 2016, 126, 4640-4653.      JCI.ORG

The Sherbenou (2016) Figure 3 Experiment

Figure 3.  In vivo CD46-ADC antimyeloma activity in the RPMI8226-Luc disseminated xenograft model.

The Benefit Of The CD46-ADC Treatment Is Outstanding.  The fourth of the five columns, the CD46-ADC column, shows by far the whitest mice, meaning most myeloma-free, but does not show a complete cure, as the middle mouse exhibits small color spots on Day=0 through 21, and as Figure 3C, which extends to Day=200, shows the death of two CD46-ADC mice at Day=150.

The Bortezomib Column Suggests That Bortezomib Kills.  As the primary focus of my attention to date has been San Miguel (2008), which addressed the question of the safety and efficacy of the chemotherapy drug Bortezomib, my attention is drawn to the Bortezomib column on the far right, where it can be seen that the only mice to have died by Day=21 are mice that were treated with Bortezomib (3/5 = 60% have died).  This observation, together with what Katz (2015) found, as discussed in ALTERNATIVE, strengthens the impression that Bortezomib does kill.

Bortezomib Kills By Stealth.  Comparing the PBS and Bortezomib Columns shows that on Day=7, 14, and 21, the Bortezomib mice showed more white and less red, which is evidence of weaker myeloma.  Particularly significant is that on Day=14, the three Bortezomib mice that are about to die are giving off weaker myeloma signals than any of the Day=14 PBS mice which are not about to die.  And the two Bortezomib mice still alive at Day=21 show more white and less red than any of their PBS counterparts (but maybe nevertheless subsequently died earlier than their PBS counterparts, though that information isn't presented).  Therefore, what?

Therefore, it may be the case that even though Bortezomib kills, it gives off weaker myeloma signals which in a clinical setting perhaps distracts attention away from the fact that it is in the process of killing.  A viable hypothesis calling out for investigation.  The dubious significance of measures other than survival is why in evaluating a treatment, I restrict attention to survival.

To facilitate closer inspection of this Bortezomib-killing-by-stealth phenomenon, the PBS and Bortezomib bioluminescence columns are shown enlarged in APPENDIX 1 at the bottom of this article.

The "Kaplan-Meier" Graph Is Not A Kaplan-Meier Graph.  Kaplan-Meier analysis serves only a single purpose: to inflate survival statistics in research plagued by a high fled/shed loss of subjects.  As it is probable in Sherbenou (2016) that no mice fled (as by escaping from their cages, or refusing to participate), and that none were shed (as by researchers stopping the experiment at a time when some mice had been in it for fewer than 200 days), running the data through Kaplan-Meier software produces only a standard survival graph, with no Kaplan-Meier inflation of the survival curves.  In that case it is better not to mention that Kaplan-Meier software was employed, as doing so might leave the impression that the curves are Kaplan-Meier-inflated and Kaplan-Meier-misleading, when in fact they are not.

It is important to understand that this comment objects only to the presence of the words "Kaplan-Meier", and finds no fault with the graph itself, which is a standard survival graph.
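The claim that Kaplan-Meier output is indistinguishable from a plain survival curve when no subjects are fled or shed can be checked numerically.  The sketch below (Python, using invented death days, not Sherbenou's actual figures) computes the Kaplan-Meier product-limit estimate and the naive surviving fraction side by side; with every subject followed to death, the two agree exactly.

```python
# Sketch with made-up data: when no subjects are censored (fled/shed),
# the Kaplan-Meier product-limit estimate collapses into the plain
# fraction of subjects still alive.

from collections import Counter

def kaplan_meier(death_days, total):
    """Kaplan-Meier product-limit estimate S(t) at each death time (no censoring)."""
    deaths = Counter(death_days)
    at_risk = total
    s, curve = 1.0, {}
    for t in sorted(deaths):
        d = deaths[t]
        s *= (at_risk - d) / at_risk   # KM step: multiply by conditional survival
        at_risk -= d
        curve[t] = s
    return curve

def naive_fraction(death_days, total):
    """Plain surviving fraction: subjects alive after day t, divided by total."""
    return {t: sum(1 for x in death_days if x > t) / total
            for t in sorted(set(death_days))}

# five mice, every one followed to death (no fled/shed losses)
days = [21, 21, 21, 48, 52]
print(kaplan_meier(days, 5))   # {21: 0.4, 48: 0.2, 52: 0.0}
print(naive_fraction(days, 5)) # identical: {21: 0.4, 48: 0.2, 52: 0.0}
```

The moment even one subject is censored, the two computations begin to diverge, which is precisely the circumstance in which, on the argument made here, Kaplan-Meier inflation arises.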

The Bortezomib Curve Is Omitted From The Survival Graph.  The survival graph shows three curves (somewhat obscured by overlap) plunging to zero around Day=50.  Had the Bortezomib curve been included, it would have stood prominently alone, already dropped to Surviving Fraction = 0.4 at Day=21 (as is evident in the BLI display), and possibly plunging all the way to zero shortly after.  Sherbenou (2016) discourages readers from concluding that Bortezomib is equally destructive in the clinical setting with the caution: "It should be noted that this Bortezomib [dosage] schedule was chosen for comparison with the ADC, not to simulate clinical use, which would be continuous."  Johnson&Johnson would have displayed exemplary probity by quickly running an experiment like Sherbenou (2016), but comparing a PBS group to six Bortezomib groups differing in dosage regimen, and demonstrating what they probably hope to be the case: that whereas Bortezomib shortens mouse lives when administered at the wrong dosages, it extends mouse lives at optimal dosages.

The Sherbenou (2016) Figure 4 Experiment

Figure 4.  Dose- and schedule-dependent in vivo activity of CD46-ADC in a disseminated MM xenograft model with MM1.S cell line.

Isn't This What A Cure Looks Like?  The Figure 3 experiment above showed the CD46-ADC group faring much better than any of the other four groups, but falling short of a complete cure.  The Figure 4 experiment tests CD46-ADC at three dosage levels, and all three produce effects superior to the three comparison groups.  Most importantly, the highest dosage, 4mg/kg×4 in the fifth of the six columns, shows what might be called a cure: not a spot of BLI color is to be seen in this column from Day=8 to 43 in the BLI display, and the corresponding survival curve in Figure 4C holds at 100% to what looks like Day=212, at which time monitoring ends, by which time all mice in all other groups are long gone.

Eight Pieces Of Very Bad Advice For This UCSF Lab

The following recommendations as to the direction that research at this UCSF laboratory should follow come from an imaginary critic with no understanding of scientific method.  Although the recommendations are preposterous and counterproductive, they are presented here for good reason, as will be explained subsequently.

  1. OMIT THE ZERO-DRUG CONTROL GROUP.  You should stop including a zero-dose PBS group.  All anybody's interested in is which treatment is better than which other treatment.  As no patient is ever going to be given zero treatment, there's no point running a zero-treatment group.

  2. RUN ONLY TWO GROUPS.  You should only run two groups per experiment.  Ask a simple question, two groups give you the answer — that's the kind of research people understand.  Running five or six groups in one experiment just leads to confusion.

  3. RUN 682 SUBJECTS.  Who is going to believe research which runs only five mice per group?  It will take approximately 341 mice per group to inspire confidence, which over two groups gives 682 mice for the entire experiment.  Costly, but very impressive.

  4. CREDIT MANY AUTHORS SCATTERED ALL OVER THE WORLD.  And who's going to believe research published by only 15 authors, all huddled together in one place?  Makes it look like a cabal which nobody much wants to join.  Twenty-one authors, and scattered all over the world, would make it look like everybody everywhere agrees with you.

  5. DIVIDE THE RESEARCH AMONG 120 CITIES ALL OVER THE WORLD.  Who's going to believe research that was conducted only in the United States?  What about China?  And has Russia vanished from the map?  Has anybody heard of a place called Argentina?  You're going to be believed just as soon as you learn to be inclusive.  You should spread each experiment over 24 countries.  And working in only one city within each country is going to open you up to the accusation of favoritism.  Within the US, I can think of 14 suitable cities; within the whole world, you should have 120 cities.  In short, cut the experiment up into 120 pieces, then ship these off to the 120 cities spread over 24 countries.

  6. TRY TO LIMIT THE NUMBER OF INVESTIGATORS PER CITY TO ONE.  Each city needs to be managed by only one investigator, though a few cities might need a bit more than one.  What say you aim at 144 investigators, spread over 120 cities, each investigator running 5 mice?

  7. ENROLL YOUR SUBJECTS LOCALLY.  For your conclusions to be generalizable to not just the particular strain of mice that you buy from one local mouse breeder, but to all kinds of mice everywhere, you should ask the investigators at each of your 120 cities all over the world to capture and include in your experiment whatever mice abound locally.

  8. PAY YOUR INVESTIGATORS LIBERALLY.  To get the best people wherever you go, you will have to pay them well, and keep paying them better and better until you get the quality of performance you are looking for.

That the eight above recommendations are pointing the way toward some very bad research would only be the judgment of the scientific world.  The advertising world, the world of infomercial production, sees them as eight pillars of wisdom.  For example, every one of the above eight fantastic and pernicious recommendations is realized in the highly-acclaimed Johnson&Johnson infomercial — San Miguel (2008) — as can be verified below, though it employs human subjects instead of mice.

Anatomy Of Johnson&Johnson's San Miguel (2008) Infomercial

As San Miguel (2008) aspired to no more than compare the effect on multiple myeloma of adding Bortezomib to Melphalan and Prednisone, he needed only two groups, an MP group and a BMP group.  No zero-drug control group.  The number of patients in the MP group was 338, and in the BMP group was 344, for a total of 682 patients in the entire experiment.

The experiment was chopped up into 144 pieces, and each piece handed to a single investigator.  The investigators worked in 120 cities spread over 24 countries.  As is estimated in the paragraph below, each investigator ran approximately 5 subjects.

San Miguel (2008) does not report how many subjects participated in each country, but we can estimate.  The number of subjects participating in San Miguel (2008) is 682, divided by the number of investigators which is 144, gives the average number of subjects per investigator of 682/144 = 4.74.  To exemplify the computation of subjects per country, then, the estimated number of subjects (rounded to the nearest integer) run by New Zealand's sole investigator would be 4.74*1 = 5, and the estimated number of subjects run by Russia's 14 investigators would be 4.74*14 = 66.  If in the future we want a rough idea of how many subjects each investigator ran, we will say they averaged around 5.
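The estimate just described reduces to a few lines of arithmetic.  The sketch below (Python; the per-country investigator counts are those reported in this article from San Miguel (2008)'s Supplementary Appendix) reproduces the computation:

```python
# Rough per-country subject estimates for San Miguel (2008), assuming
# subjects were spread evenly across its 144 investigators.

TOTAL_SUBJECTS = 682
TOTAL_INVESTIGATORS = 144

per_investigator = TOTAL_SUBJECTS / TOTAL_INVESTIGATORS
print(round(per_investigator, 2))            # 4.74 subjects per investigator

# investigators per country, as given in the Supplementary Appendix
investigators = {"New Zealand": 1, "Russia": 14, "United States": 14}
for country, n in investigators.items():
    print(country, round(per_investigator * n))  # New Zealand 5, Russia 66, United States 66
```

The even-spread assumption is of course only an approximation, which is why the text settles for "around 5" subjects per investigator.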

The table below lays out the structure of San Miguel (2008).  All information originates from San Miguel (2008) and its Supplementary Appendix, with the exception of the CORRUPTION column on the far right, which is the country ranking assigned by TRANSPARENCY INTERNATIONAL: New Zealand=1/176 signifies the least corrupt of the 176 countries rated by TRANSPARENCY INTERNATIONAL, and Russia=131/176 happens to be the most corrupt in the San Miguel (2008) batch of 24 countries.  Russia is highlighted in red in the table to draw attention to a curious phenomenon: though by far the most corrupt of the 24 countries relied upon in San Miguel (2008), Russia seems nevertheless to have received among the warmest welcomes, being allowed 3 authors and 14 investigators.


Transparency International (TI) has published the Corruption Perceptions Index (CPI) since 1996, annually ranking countries "by their perceived levels of corruption, as determined by expert assessments and opinion surveys."  The CPI generally defines corruption as "the misuse of public power for private benefit".     Wikipedia

Some Characteristics Of The
Johnson&Johnson Business Model
As Manifested In The San Miguel (2008) Infomercial
(Authors are shown with full names, investigators with initials; counts per country appear in parentheses.)

 1. Argentina.  Authors: none.  Investigators (5): C Corrado, D Fantl, D Riveros, J Garcia, G Klein.  Cities: Buenos Aires, La Plata.

 2. Australia.  Authors: none.  Investigators (5): N Horvath, A Spencer, R Hermann, M Hertzberg, P Marlton.

 3. Austria.  Authors: none.  Investigators (7): W Linkisch, E Gunsilius, A Petzer, R Greil, J Drach, H Gisslinger, J Thaler.

 4. Belgium.  Authors (3): Rik Schots, Andrew Cakana, Helgi van de Velde.  Investigators (14): A Van de Velde, P Zachee, J Van Droogenbroeck, D Bron, W Feremans, A Ferrant, M Andre, A Janssens, M Delforge, Y Beguin, B De Prijck, C Doyen, H Demuynck, P Vermeulen.  City: Mont Godinne.

 5. Canada.  Authors: none.  Investigators (6): N Bahlis, A Belch, S Fox, A Lavoie, M Voralia, S Dolan.  Cities: Greenfield Park, Quebec City, St John.

 6. China.  Author (1): Bin Jiang.  Investigators (5): H Ai, Y Zhao, F Meng, J Hou, Z Shen.

 7. Czech Republic.  Author (1): Ivan Spicka.  Investigators (2): R Hajek, V Maisnar.  City: Hradec Kralove.

 8. Finland.  Authors: none.  Investigators (4): A Sikio, S Vanhatalo, E Koivunen, K Remes.

 9. France.  Authors: none.  Investigators (5): G Salles, M Michallet, C Hulin, J Harousseau, M Attal.

10. Germany.  Authors (2): Rudolph Schlag, Martin Kropff.  Investigators (9): M Welslau, M Haenel, C Gabor, W Knauf, M Engelhardt, H Durk, H Goldschmidt, G Hess, M Schlag.

11. Greece.  Author (1): Meletios A Dimopoulos.  Investigators (2): N Zoumbos, K Zervas.

12. Hungary.  Authors: none.  Investigators (6): S Fekete, T Masszi, G Tarkovacs, A Illes, G Radvanyi, Z Borbenyi.

13. Ireland.  Authors: none.  Investigator (1): M O'Dwyer.

14. Israel.  Author (1): Ofer Shpilberg.  Investigators (4): J Rowe, D Ben-Yehuda, A Nagler, A Berrebi.

15. Italy.  Authors (2): Maria T Petrucci, Antonio Palumbo.  Investigators (9): M Cavo, E Morra, A Pinto, M Lazzarino, A Liberati, R Foa, P De Fabritiis, P Musto, M Boccadoro.  City: San Giovanni Rotondo.

16. Korea.  Authors: none.  Investigators (5): K Kim, C Min, Y Min, C Suh, S Yoon.

17. New Zealand.  Authors: none.  Investigator (1): S Gibbons.

18. Poland.  Author (1): Anna Dmoszynska.  Investigators (7): J Kloczko, A Hellmann, J Holowiecki, T Robak, M Komarnicki, K Sulek, K Kuliczkowski.

19. Russia.  Authors (3): Nuriet K Khuageva, Olga S Samoilova, Kudrat M Abdulkadyrov.  Investigators (14): Y Dunaev, A Loginov, A Suvorov, M Biakhov, A Golenkov, O Rukavitsyn, V Savchenko, N Domnikova, V Pavlov, V Rossiev, J Alexeeva, G Gaisarova, V Patrin, V Yablokova.  City: St Petersburg.

20. Spain.  Authors (2): Jesus F San Miguel, Maria-Victoria Mateos.  Investigators (7): J Bladé, A Sureda, A Alegre, J Diaz-Mediavilla, J Hernandez, J De La Rubia, L Palomera.

21. Sweden.  Authors: none.  Investigators (2): B Bjorkstrand, A Gruber.

22. Taiwan.  Authors: none.  Investigators (5): C Kuan-Der Lee, C Kuo, T Chao, S Huang, L Shih.  City: Chiayi Hsien.

23. United Kingdom.  Authors: none.  Investigators (5): M Kazmi, J Cavet, N Russell, T Littlewood, S Rule.

24. United States.  Authors (4): Kenneth C Anderson, Dixie L Esseltine, Kevin Liu, Paul G Richardson.  Investigators (14): S Noga, D Irwin, C Holladay, D Bruetman, F Butler, W Hanna, R Lewis, J Berdeja, G Schiller, N Callander, J Gurtler, V Morrison, T Roberts, J Glass.  Cities: Baltimore MD, Berkeley CA, Charleston SC, Goshen IN, Indianapolis IN, Knoxville TN, Lake Charles LA, Loma Linda CA, Los Angeles CA, Madison WI, Metairie LA, Minneapolis MN, New Orleans LA, Shreveport LA.
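As a cross-check on the country-by-country listing above, the author and investigator counts can be tallied in a few lines.  The sketch below (Python; counts transcribed from the listing, which in turn draws on San Miguel (2008) and its Supplementary Appendix) confirms the totals of 21 authors and 144 investigators cited in the text.

```python
# Tally of San Miguel (2008) authors and investigators per country.
# Totals should match the 21 authors and 144 investigators cited above.

authors = {"Belgium": 3, "China": 1, "Czech Republic": 1, "Germany": 2,
           "Greece": 1, "Israel": 1, "Italy": 2, "Poland": 1,
           "Russia": 3, "Spain": 2, "United States": 4}

investigators = {"Argentina": 5, "Australia": 5, "Austria": 7, "Belgium": 14,
                 "Canada": 6, "China": 5, "Czech Republic": 2, "Finland": 4,
                 "France": 5, "Germany": 9, "Greece": 2, "Hungary": 6,
                 "Ireland": 1, "Israel": 4, "Italy": 9, "Korea": 5,
                 "New Zealand": 1, "Poland": 7, "Russia": 14, "Spain": 7,
                 "Sweden": 2, "Taiwan": 5, "United Kingdom": 5,
                 "United States": 14}

print(sum(authors.values()), sum(investigators.values()))  # 21 144
```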

One justification of the above thin scattering of investigators all over the globe might be that multiple myeloma is thinly distributed, so that once an investigator has used up the five patients within reach, there are not enough left to continue in that location.

But only five patients in a city as populous as Los Angeles?  Only five in each of Baltimore, London, Manchester, Warsaw, St Petersburg?

It may be wondered whether multiple myeloma patients are truly that scarce.  For the entire United States, the American Cancer Society estimates that 30,280 new cases will be diagnosed during 2017.  And the National Cancer Institute estimates that in the United States in 2014, there were 118,539 people living with multiple myeloma.

Given that Johnson&Johnson needed only, say, 12 patients per group (24 patients for the entire two-group clinical trial), couldn't these have been found in a single location, say Toronto's Princess Margaret Lymphoma and Myeloma Site Group, which treats more than 6,000 patients each year, or Rochester Minnesota's Mayo Clinic, which treats more than 3,400 multiple myeloma patients annually?

The hypothesis that Johnson&Johnson was forced to perform a 24-Country-Experiment because of a scarcity of patients has the rug yanked from under its feet by the observation that it has become standard practice for pharmaceutical companies to (1) deliberately spread their research over more sites than necessary to acquire subjects, and to (2) deliberately limit the recruitment of subjects at each site by terminating enrollment while subject applications are still pending:


International Clinical Trials In Serbia: Why Not Enroll The Right Patients Fast?
Jelena Barjaktarovic, Maxim Belotserkovsky, Mirjana Jovanovic, Melita Zonic    Clinical Leader    15 Nov 2013

"Moreover, the practice of adding more sites per study than necessary and requiring each site to recruit fewer subjects per site is a standard, although questionable, risk mitigation practice."

Suggested in the above statement is that the "standard practice" of scattering pieces of an experiment over the surface of the earth while limiting subjects per piece is somehow "risk mitigating" — but what risk it is that is being mitigated is not explained.  The further information provided below may begin to suggest that the risk being mitigated is the risk to the pharmaceutical company that its clinical trial will reveal that the drug being tested is ineffective or harmful.

How Infomercials Come To Be So Different From Scientific Research

Below are excerpts from a Vanity Fair article which throws light on the burgeoning phenomenon of outsourcing/offshoring of pharmaceutical research:


Deadly Medicine

Prescription drugs kill some 200,000 Americans every year.  Will that number go up, now that most clinical trials are conducted overseas — on sick Russians, homeless Poles, and slum-dwelling Chinese — in places where regulation is virtually nonexistent, the F.D.A. doesn’t reach, and "mistakes" can end up in pauper’s graves?  The authors investigate the globalization of the pharmaceutical industry, and the U.S. Government’s failure to rein in a lethal profit machine.



More and more clinical trials for new drugs are being outsourced overseas and conducted by companies for hire.  Is oversight even possible?


"Rescue Countries"

One big factor in the shift of clinical trials to foreign countries is a loophole in F.D.A. regulations: if studies in the United States suggest that a drug has no benefit, trials from abroad can often be used in their stead to secure F.D.A. approval.  There’s even a term for countries that have shown themselves to be especially amenable when drug companies need positive data fast: they’re called "rescue countries."  Rescue countries came to the aid of Ketek, the first of a new generation of widely heralded antibiotics to treat respiratory-tract infections.  Ketek was developed in the 1990s by Aventis Pharmaceuticals, now Sanofi-Aventis.  In 2004 — on April Fools’ Day, as it happens — the F.D.A. certified Ketek as safe and effective.  The F.D.A.’s decision was based heavily on the results of studies in Hungary, Morocco, Tunisia, and Turkey.

The approval came less than one month after a researcher in the United States was sentenced to 57 months in prison for falsifying her own Ketek data.  Dr. Anne Kirkman-Campbell, of Gadsden, Alabama, seemingly never met a person she couldn’t sign up to participate in a drug trial.  She enrolled more than 400 volunteers, about 1 percent of the town’s adult population, including her entire office staff.  In return, she collected $400 a head from Sanofi-Aventis.  It later came to light that the data from at least 91 percent of her patients was falsified.  (Kirkman-Campbell was not the only troublesome Aventis researcher.  Another physician, in charge of the third-largest Ketek trial site, was addicted to cocaine.  The same month his data was submitted to the F.D.A. he was arrested while holding his wife hostage at gunpoint.)  Nonetheless, on the basis of overseas trials, Ketek won approval.  [...]

Today it is mainly independent contractors who recruit potential patients both in the U.S. and — increasingly — overseas.  They devise the rules for the clinical trials, conduct the trials themselves, prepare reports on the results, ghostwrite technical articles for medical journals, and create promotional campaigns.  The people doing the work on the front lines are not independent scientists.  They are wage-earning technicians who are paid to gather a certain number of human beings; sometimes sequester and feed them; administer certain chemical inputs; and collect samples of urine and blood at regular intervals.  [...]

What began as a mom-and-pop operation has grown into a vast army of formal "contract-research organizations" that generate annual revenue of $20 billion.  [...]

The F.D.A., the federal agency charged with oversight of the food and drugs that Americans consume, is rife with conflicts of interest.  Doctors who insist the drug you take is perfectly safe may be collecting hundreds of thousands of dollars from the company selling the drug.  (ProPublica, an independent, nonprofit news organization that is compiling an ongoing catalogue of pharmaceutical-company payments to physicians, has identified 17,000 doctors who have collected speaking and consulting fees, including nearly 400 who have received $100,000 or more since 2009.)  [...]

The economic incentives for doctors in poor countries to heed the wishes of the drug companies are immense.  [...]  "In Russia, a doctor makes two hundred dollars a month, and he is going to make five thousand dollars per Alzheimer’s patient" that he signs up.  [...]

The F.D.A. gets its information on foreign trials almost entirely from the companies themselves.  It conducts little or no independent research.  The investigators contracted by the pharmaceutical companies to manage clinical trials are left pretty much on their own.  In 2008 the F.D.A. inspected just 1.9 percent of trial sites inside the United States to ensure that they were complying with basic standards.  Outside the country, it inspected even fewer trial sites—seven-tenths of 1 percent.  In 2008, the F.D.A. visited only 45 of the 6,485 locations where foreign drug trials were being conducted.

The pharmaceutical industry dismisses concerns about the reliability of clinical trials conducted in developing countries, but the potential dangers were driven home to Canadian researchers in 2007.  While reviewing data from a clinical trial in Iran for a new heart drug, they discovered that many of the results were fraudulent.  "It was bad, so bad we thought the data was not salvageable," Dr. Gordon Guyatt, part of the research group at McMaster University in Hamilton, told Canada’s National Post.  [...]

The Duke team noted that, in some places, "financial compensation for research participation may exceed participants' annual wages [...].  [...]

What Else Is Wrong With The San Miguel (2008) Infomercial?

That is, what else are we now finding to be wrong beyond what has been discussed in earlier articles on semmel-weis.info?

  1. SAN MIGUEL (2008) LACKED A ZERO-DRUG CONTROL GROUP.  Can't altogether deny treatment to human subjects fighting for their lives? — Agreed!  But in that case it is obligatory to run the proposed research in a preliminary mouse or rat experiment before running it in a human clinical trial, and that preliminary and prerequisite mouse or rat experiment could include the zero-drug control group.  This presently-missing control group is important because it alone is able to detect whether the favored drug being tested is worse than no drug at all.

  2. SAN MIGUEL (2008) RAN ONLY TWO GROUPS.  Researchers trying to understand an area of study have hundreds of questions, and try to answer as many as they can pack into each experiment by running several groups at one time — five groups, or six, in the two Sherbenou (2016) experiments above, and further below is recommended a twenty-group murine-model experiment as a prerequisite to running every clinical trial.  On the other hand, ad creators dedicating vast resources to answering only one narrow question in a two-group study are not primarily aiming to broaden understanding, but rather are seeking to legitimize a one-line conclusion that they can place before regulatory agencies, like the American FDA, in support of their application for the approval of their product.  The San Miguel (2008) conclusion was simply "Bortezomib plus melphalan-prednisone was superior to melphalan prednisone alone in patients with newly-diagnosed myeloma who were ineligible for high-dose therapy".

  3. IN HIS TWO-GROUP STUDY, SAN MIGUEL (2008) RAN 682 SUBJECTS.  Twenty-four subjects, 12 per group, would have been ample.  As we have seen in the two Sherbenou (2016) experiments above, even 5 per group is ample when one has highly-homogeneous subjects at the outset, has on top of that a high standardization of procedures, and expects the manipulations to have a strong effect.

    A hyper-inflated number of subjects is relied on for several reasons, none of them good.  It creates a positive (but false) impression of quality research in the public mind.  It sets a precedent of expensive research which inhibits the development of effective treatments by those who lack fat wallets.  It excuses searching for subjects offshore (where we now learn that research results can be bought more readily).  And it uses up available subjects, which interferes with the research of others.

    On the question of using up patients so as to frustrate the research of others, perhaps the highlighted text below points to an additional way this is being accomplished:


    Why the federal government urgently needs to fund more cancer research

    By Frank Lalli     SEPTEMBER 5, 2017    LA Times

    [...]  This sets back basic cancer research in several ways.  Because drug companies are investing millions, if not billions, to develop proprietary, patented medicines, they don’t share their discoveries as openly as the NIH does.  Perhaps worse, they recklessly duplicate trials for certain common cancers, such as melanoma, to the point where around 40% fizzle out for a lack of patients to test.  [...]

    The only legitimate use of a very large number of subjects is that it alone permits the detection of weak effects, and even this legitimate use is a poor one, for the reason that weak effects should be studied only after strong effects have been discovered and examined and understood, which today is far from the situation.  And in any case, San Miguel (2008) is not discovering any effects, weak or strong, because its methodology is so profoundly and irretrievably flawed that it is discovering nothing.
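The earlier claim that five subjects per group can suffice when an effect is strong admits a back-of-envelope check.  In the sketch below (Python; the all-or-nothing outcome is hypothetical, not data from any trial), if all five treated subjects survive and all five controls die, the one-sided Fisher exact p-value is 1/C(10,5), far below the conventional 0.05 threshold.

```python
# Back-of-envelope sketch with a hypothetical all-or-nothing outcome:
# with a strong enough effect, even 5 subjects per group yields a
# conventionally significant result.  If all 5 treated subjects survive
# and all 5 controls die, the one-sided Fisher exact p-value is the
# chance of that exact split under the null: 1 / C(10, 5).

from math import comb

n_per_group = 5
p = 1 / comb(2 * n_per_group, n_per_group)
print(p)   # 1/252, roughly 0.004, well below the usual 0.05 threshold
```

Weaker effects would of course require larger groups; the point is only that strong effects do not.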

  4. SAN MIGUEL (2008) CREDITED 21 AUTHORS SCATTERED ALL OVER THE WORLD.  Which invites the question of whether so many, and so widely-separated, people really collaborated in designing and running and analyzing the project, or whether Johnson&Johnson mainly paid them for the use of their names, with the pretend-researchers benefitting also from padding their curriculum vitae with another publication.

    Twenty-one geographically-dispersed authors can fail to arouse public suspicion only as long as the public remains unaware of the volume of deception that goes on in authorship assignment (one variety of which deception goes by such names as hyperauthorship, kilo-authorship, honorary authorship, guest authorship, gift authorship, pseudo-authorship, and promiscuous authorship):


    Ghostwriters in the Medical Literature

    By Susan Gaidos   Science  Nov. 12, 2010


    When physician Adriane Fugh-Berman of Georgetown University School of Medicine in Washington, D.C., was asked to write a review article on interactions between herbs and warfarin, she said maybe.  [...]

    A few months later, a finished manuscript arrived on her desk.  All she needed to do was read and approve it.  [...]


    Authorship: why not just toss a coin?

    Kevin Strange
    American Journal of Physiology — Cell Physiology   Published 5 September 2008 Vol. 295 no. 3, C567-C575 (footnotes deleted)

    [...]  “Ghost authors” are authors whose names are omitted from a paper.  There can be numerous deceitful reasons for ghost authorship.  For example, it is well known that some pharmaceutical companies hire professional writers to write papers favorably describing their products.  A bona fide academic is then asked or hired to sign their name to the paper to give it and the product legitimacy.  [...]

    The Consequences of Authorship Abuse

    Inappropriately assuming authorship on scientific papers can and should have significant negative consequences for those who choose to do so.  One of the most infamous examples of the consequences of promiscuous authorship is the “Darsee affair”.  Dr. John Darsee was a clinician investigator who worked at Harvard Medical School and Emory University School of Medicine.  From 1978 to 1981, Darsee authored or coauthored 18 full-length research papers and over 100 abstracts, reviews, book chapters, and short papers in the field of cardiology.  In May 1981, Darsee admitted to fabricating data in a single paper.  However, investigative committees at Harvard, Emory, and the National Institutes of Health (NIH) ultimately concluded that more than 100 of his publications contained fabricated data.  Many of the fraudulent publications listed authors who had made no contribution to the work.  In some cases these authors became aware that their names were associated with the work only after publication, while in other cases, individuals knowingly accepted the “gift authorship”.  When the publications were shown to be fraudulent, the “gift authors” were placed in the disquieting position of proving that they had not participated in the fraud and rationalizing why they could take no responsibility for the work even though they had assumed authorship of it.

    Another infamous case of fraud and promiscuous authorship is that of Robert Slutsky, a clinical investigator at the University of California at San Diego (UCSD).  From 1983 to 1984, it was estimated that Slutsky published on average one paper every ten days.  An investigating committee at UCSD concluded that as many as 68 of Slutsky's publications were likely to be fraudulent or of “questionable validity”.  As with Darsee, gift authorships were a common feature of Slutsky's publications.  The UCSD report states that knowing acceptance of coauthorship by investigators who had made no significant contribution to the work made a “mockery of authorship of scientific manuscripts, and in this case may have contributed to the perpetuation of research fraud”.  [...]



    A. J. van Loon    Nature 389, 11 (4 September 1997)


    In more than 25 years working as a scientific editor (in geology, nuclear energy and technology) and in national and international editorial organizations, I have not been aware of any valid argument for more than three authors per paper, although I recognize that this may not be true for every field.

    Perhaps scientific journals and international organizations could take the lead. If journals were to instruct authors that manuscripts with more than three authors would not normally be considered for publication, there would soon be a drop in the number of pseudo-authors.

  5. SAN MIGUEL (2008) SPREAD A TWO-GROUP EXPERIMENT TO 120 CITIES ALL OVER THE WORLD.  Whereas one experimenter running a total of 24 people at one facility permits close monitoring and standardization of equipment and procedure, running a total of 682 people who are measured and treated and tested in 120 cities permits only negligible monitoring and standardization, and rather guarantees that what happens during treatment and testing will differ from place to place.

    Furthermore, if an investigator tends to stop work after running approximately five subjects, then no investigator will acquire enough experience to be considered highly trained, and therefore all will be more like beginners, prone to do things in their own idiosyncratic ways, which reduces standardization.

    Every variety of research has a long list of hidden pitfalls of which only experts in the area are aware and know how to prevent, such that entrusting the conduct of an experiment to unmonitored strangers in distant lands speaking unfamiliar languages guarantees that they will fall into one pitfall after another and will ruin the validity of their data.

  6. SAN MIGUEL (2008) MAY HAVE SUCCEEDED IN LIMITING THE NUMBER OF INVESTIGATORS PER LOCATION TO ONE.  Most commonly, each city in San Miguel (2008) hosted only a single investigator, but it is possible that in the infrequent case that more than one investigator was recruited in a city, each worked in a different facility within that city, such that all 144 of the investigators may have worked alone.

    Generally, it is best to have a research team working together because they can watch each other, and learn from each other, and correct each other's errors, and cover for each other, and blow the whistle if impropriety is observed.  What motivation, then, would compel Johnson&Johnson to adopt a practice as strange and damaging to scientific integrity as isolating investigators from each other?

    I can think of only one explanation: if the research were conducted in a single facility, then any substantial data falsification would become known to all, which would leave the sponsor at the mercy of both whistle-blowers and blackmailers.  That can't happen if each investigator works in isolation and falsifies data on his own initiative, having been encouraged to it by sponsor demonstration of weak verification combined with reward for production of desired results.

  7. SAN MIGUEL (2008) ENROLLED SUBJECTS LOCALLY THE WORLD OVER.  The effect of which was to raise subject heterogeneity impermissibly.  Pre-treatment equality of subject characteristics between the various groups in an experiment is largely achieved by random assignment of subjects to groups, and yet it is so important a goal that it is traditionally given a boost by also starting out with a pool of pre-randomization subjects who are as homogeneous as possible.  In a typical clinical trial, for example, the too-young will be excluded from the pre-randomization pool, as will the too-old, and the too-sick will be excluded along with the too-healthy, and so on.

    In animal experiments, the subjects will all be not only of the same species but of the same strain, almost always a strain that has been previously used in hundreds of other experiments, and of course of the same age, and often even of the same sex.  A scientist will not include in his experiment whatever mice can be gathered around the world, because mouse-like rodents come in more than one thousand species, and the animals' ages will differ as well, as will the bacteria and viruses and parasites and toxins they carry, and their diets, and the hundred other ways in which a mouse captured in an Argentinian sewer can differ from a mouse captured on a Chinese mountainside.  The greater the heterogeneity that is allowed among the pre-randomization subjects, the weaker becomes the guarantee of post-randomization, pre-treatment equality of groups that the randomization hopes to achieve.

    The same is true when the subjects are people.  When Johnson&Johnson admitted into the San Miguel (2008) project 682 people from around the world, it was inviting an unacceptable level of pre-randomization subject heterogeneity which acted to undermine the post-randomization and pre-treatment equality which is so essential to a valid experiment.
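How pre-randomization heterogeneity weakens randomization can be shown with a toy simulation, sketched below under wholly invented assumptions (the spreads, group sizes, and trial counts are illustrative numbers, not data from any trial): subjects are drawn from a homogeneous or a heterogeneous population, randomly split into two groups, and the typical chance imbalance between the two group means is compared.

```python
import random
import statistics

def mean_imbalance(spread, n_per_group=12, trials=2000, seed=0):
    """Average absolute difference between the means of two randomly
    assigned groups, for subjects drawn from a population whose
    standard deviation is 'spread'."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(trials):
        subjects = [rng.gauss(0, spread) for _ in range(2 * n_per_group)]
        rng.shuffle(subjects)                    # random assignment
        a, b = subjects[:n_per_group], subjects[n_per_group:]
        diffs.append(abs(statistics.mean(a) - statistics.mean(b)))
    return statistics.mean(diffs)

homogeneous = mean_imbalance(spread=1)    # e.g. one lab strain, one city
heterogeneous = mean_imbalance(spread=5)  # e.g. recruits from everywhere
print(homogeneous < heterogeneous)        # True
```

The chance imbalance between randomized groups grows in direct proportion to the spread of the pre-randomization pool, which is the arithmetic behind insisting on homogeneous subjects before randomizing.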

    APPENDIX 2 supplies a graphic reminder of the subject heterogeneity, whether of people, of mice, or, as in the collection gathered here, of rats, that is likely to be encountered upon roaming the world.

  8. SAN MIGUEL (2008) MAY HAVE GUARANTEED CORRUPTION.  Investigators and other employees scattered all over the world are not only paid, but are sometimes paid generously, and even extravagantly.  Given that infomercial creation is open-label, everyone involved will understand what results the sponsor expects from which subjects, and will be loath to disappoint the sponsor by giving him unwelcome results, and will be loath to risk losing the continuing income from future clinical trials should sponsors choose to outsource their work to alternative locations which are better at manufacturing happy data, alternative locations which have earned the bankable designation "rescue countries".

    So, it may be wondered, did Johnson&Johnson lug a chunk of its research all the way to, for example, Siberia in order to take advantage of a better rescue package?  No way of knowing — San Miguel (2008) presents no breakdown of data by location.

Pharmaceutical Companies Are Not All The Same

Whereas Johnson&Johnson conducted its Bortezomib-promoting San Miguel (2008) VISTA clinical trial in 24 countries, from Argentina to the United States, Celyad is presently conducting its CAR-T NKR-2 THINK trial in 2 countries: Belgium and the United States.

Whereas Johnson&Johnson conducted San Miguel (2008) in countries sporting TRANSPARENCY INTERNATIONAL Corruption-Perception-Index ranks as abysmal as China=84, Argentina=95, and Russia=131, Celyad THINK dips only as low as United States=18.

Whereas Johnson&Johnson had San Miguel (2008) run only 2 groups, Celyad THINK is running 7 groups, each typified by one of the seven cancers being treated — five solid (ovarian, bladder, colorectal, pancreatic, and triple-negative breast) and two hematological (acute myeloid leukemia and multiple myeloma).

Whereas Johnson&Johnson conducted VISTA using Bortezomib-Group N=344 and Control-Group N=338, Celyad THINK plans N=14 per each of its seven cancer-type groups.

Perhaps what these four comparisons suggest is that whereas Johnson&Johnson produces crowd-pleasing infomercials, Celyad conducts scientific research.

Johnson&Johnson Should Stop Passing Off
Infomercials As Scientific Research

Even before taking into consideration anything written on the instant page, we had already seen many and weighty reasons for regarding San Miguel (2008) not as scientific research but as a sales-promotion effort:

San Miguel (2008) Flunks The Test Of Research Integrity

Of the Five Requirements that had to be satisfied to achieve scientific validity, San Miguel (2008) blatantly violated four.  Violating just one would have proven fatal to the validity of the study's conclusions, but violating four actually imbues San Miguel (2008) with an alternative utility — as exercise material in scientific-method courses teaching students to distinguish scientific research from Madison-Avenue-inspired infomercial creation.

What the 24-Country-Experiment analysis adds is a second invalidation of San Miguel (2008), as totally devastating as the first, and the two invalidations taken together render the invalidity of San Miguel (2008) so palpably egregious as to be unexplainable by mere naivete and under-education of Johnson&Johnson executives.  It is only when we begin to see the money that is simultaneously changing hands that we realize that greed is able to supply the powerful motivation needed to deviate so outrageously not only from scientific method but also from common sense.

Thus, we see in the TREFIS material below that Johnson&Johnson revenue from Velcade (same as Bortezomib) was $1.2 billion in 2016, though it is expected to decline to a mere $150 million by 2023.

Not to worry on behalf of Johnson&Johnson, though!  It has a new multiple myeloma drug already in the works, Darzalex, which is expected to more than compensate for the decline of Velcade, moving from $572 million in 2016 to an expected more than $4 billion by 2023, and which furthermore has a "peak sales potential" of $7 billion.


Imbruvica And Darzalex Are of Key Importance For J&J's Future Growth


The chart below shows how different oncology drug revenues will shape up in the coming years, according to our estimates.     (TREFIS, 19 Sep 2017):

Johnson&Johnson predicted oncology drug revenues, graph


Johnson&Johnson predicted oncology drug revenues, text


We wish Johnson&Johnson the best of luck, and hope that it not only meets its high revenue expectations, but exceeds them — but at the same time we ask Johnson&Johnson to legitimize this revenue by first satisfying four requirements.

First, the requirement that Johnson&Johnson disclose to all Bortezomib patients, present and future, that it has been proposed, and remains undisputed, that some, possibly all, of the research offered in Bortezomib's support is so flawed as to prove nothing, and that whatever Bortezomib treatment they are about to receive, either in clinical trials or in therapy, has never been tested on animals.

Second, the requirement that Johnson&Johnson go to the comparatively trivial expense of performing the definitive mouse experiment establishing the fundamental truths concerning Bortezomib and multiple myeloma.  The experiment consists of a 2 × 3 × 3 factorial with the variables myeloma infection vs no myeloma infection, times three magnitudes of Bortezomib dosage, times three durations of Bortezomib treatment, which gives 18 groups, plus two more groups, neither getting treatment but one getting myeloma and the other not, as laid out in the table below.  Twenty groups with, say, seven mice per group, is 140 mice.  Now and for all time, let such a murine model be completed as a prerequisite to justify and to guide the parallel human clinical trial, and let its results be disclosed to all patients in clinical trials and clinical treatment.  Do it because it is essential to learn from the myeloma-free Groups 1 to 10 just how lethal Bortezomib is when its effects are not hidden behind the effects of myeloma.  Do it as a prerequisite to every clinical trial, or else admit that it is in Johnson&Johnson's commercial interests to prolong ignorance of what happens to animals when they are put through the treatments that Johnson&Johnson sells to humans.

                NO MYELOMA INFECTION            MYELOMA INFECTION
 Treatment       Bortezomib Dosage               Bortezomib Dosage
 Duration    None   Low   Medium   High      None   Low   Medium   High
 Long                8      9       10               18     19      20
 Medium              5      6        7               15     16      17
 Short               2      3        4               12     13      14
 Zero          1                              11
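The proposed 20-group layout can be enumerated mechanically.  The sketch below is merely a design-checking aid (the level names are labels chosen here for illustration); it confirms the arithmetic of 18 factorial cells plus 2 untreated controls, and 140 mice at 7 per group.

```python
from itertools import product

infections = ["no myeloma", "myeloma"]   # 2 levels
dosages = ["low", "medium", "high"]      # 3 levels of Bortezomib dosage
durations = ["short", "medium", "long"]  # 3 levels of treatment duration

# The 18 treated cells of the 2 x 3 x 3 factorial ...
treated = list(product(infections, durations, dosages))

# ... plus one untreated control per infection status (Groups 1 and 11).
controls = [(inf, "zero", "none") for inf in infections]

groups = controls + treated
mice_per_group = 7

print(len(treated))                  # 18
print(len(groups))                   # 20
print(len(groups) * mice_per_group)  # 140
```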

Third, the requirement that Johnson&Johnson stop manufacturing worthless infomercials and start performing scientific research.

Fourth, the requirement that Johnson&Johnson retract all the worthless infomercials that it has ever produced, or at least the ones that continue to be relied upon today.

Although satisfying the above four requirements is clearly in the interests of the advancement of science and of medicine, and clearly in the interests of protecting the health and life of patients, it is just as clearly contrary to the short-term business interests of Johnson&Johnson, and so may not get done.  But if left undone, then those anticipated billions of revenue dollars will be gained by selling a drug which not only lacks scientific authentication, but which also shows signs of shortening life.

A Fourth Red Box Warning

In view of the above information on foreign outsourcing of big-N clinical trials, to the KAPLAN-MEIER redbox and the TRILOGY redbox and the ALTERNATIVE redbox there may now be added the following 24-COUNTRY-EXPERIMENT redbox:

This unofficial warning is not yet an FDA warning, but should be:

Among the signals that what is represented as a drug-testing clinical trial is in reality an infomercial are these: that it was spread over many countries (for example, 24), and within each country over many locations (for example, totalling 120), with each location limited to very few investigators (usually only 1), and each investigator limited to very few subjects (for example, 5).  When faced with conclusions based on such a procedural pattern, or on one beginning to resemble it, every patient has the right to begin verifying the credibility of the research by receiving answers to at least the following five questions:

  1. Does not mustering patients all over the world result in high pre-randomization heterogeneity of patients which weakens the power of random assignment to produce pre-treatment equality of patient characteristics?

  2. Does not entrusting patient treatment and testing to a large number of different investigators, working in different clinics, speaking different languages, scattered all over the world, make it impossible to monitor and verify and standardize the treatments they deliver and the measurements they record?

  3. If it were the case that investigators in "rescue countries" are motivated to continue receiving clinical-trial revenue, might they be tempted to manufacture results which they know the sponsors have come to their countries to buy?

  4. Is there any explanation for drastic compartmentalization of investigators in pharmaceutical research other than that compartmentalization serves to conceal wrongdoing?

  5. May it be the case that a clinical trial which parades its near-term disease measures, while neglecting long-term survival, is promoting a drug whose killing of healthy cells hurts more than its killing of cancer cells helps?


Close-up of PBS and Bortezomib Groups in Sherbenou (2016) Figure 3B.

The phenomenon clarified in this enlarged view has been referred to above as "Bortezomib kills by stealth" and is based on the observation that although the Bortezomib mice die sooner (3/5 = 60% are dead by Day=21), while alive their myeloma symptoms are less severe.  For example, at Day=14, mice 1, 2, and 4 in the Bortezomib Group (the three mice about to die) may be exhibiting more white and less red (indicative of weaker myeloma) than the average PBS mouse, of which none are about to die.

And then at Day=21, the two Bortezomib mice still alive are definitely exhibiting weaker myeloma symptoms (again, by showing more white and less red) than their five PBS counterparts, and perhaps these two Bortezomib mice will nevertheless also die sooner, though we don't know because this BLI display does not go beyond Day=21, and because the Bortezomib group was not included in the Figure 3C survival graph which extends to Day=200.

The explanation of the Bortezomib killing-by-stealth phenomenon may be that Bortezomib kills both cancer cells and healthy cells.  Tracking only the cancer cells makes it look as if the mouse is benefiting.  But tracking longevity reveals that the killing of healthy cells hurts more than the killing of cancer cells helps.  That is why the outcome measure most worth looking at is survival.
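The stealth argument reduces to simple arithmetic, illustrated below with a deliberately hypothetical toy model.  None of these numbers come from Sherbenou (2016) or from any real experiment; the cell counts, growth factor, and kill fractions are invented solely to show how a tumor-burden endpoint and a survival-related endpoint can point in opposite directions.

```python
# Hypothetical toy model of "killing by stealth": all numbers invented.
cancer_cells = 1000
healthy_cells = 10000

# Untreated: cancer grows unchecked, healthy tissue is spared.
untreated_cancer = cancer_cells * 4       # tumor burden looks bad
untreated_healthy = healthy_cells         # but healthy reserve is intact

# Treated: the drug kills 60% of cancer cells and 50% of healthy cells.
treated_cancer = cancer_cells * 4 * 0.4   # tumor burden looks good
treated_healthy = healthy_cells * 0.5     # but healthy reserve is halved

# The tumor-burden endpoint favors treatment ...
print(treated_cancer < untreated_cancer)    # True

# ... while a survival proxy tied to healthy reserve favors no treatment.
print(treated_healthy < untreated_healthy)  # True
```

Under these made-up numbers, an endpoint that tracks only cancer cells reports benefit, while an endpoint that depends on healthy tissue reports harm, which is the sense in which survival is the endpoint that cannot be fooled.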

Sherbenou 2016 Figure 3, PBS column only      Sherbenou 2016 Figure 3, Bortezomib column only


Graphic Reminder Of The Threat Of Pre-Randomization-Subject Heterogeneity

Casting a recruitment net all over the world increases the heterogeneity of people and of animals — an obvious fact which designers of drug infomercials hope the public will overlook.  The photos below are not intended to reflect the variety of subject characteristics in San Miguel (2008), or to typify either the people or the animals found in the locations named below, but only to suggest that selecting people or animals from all over the globe is likely to result in far greater heterogeneity than recruiting people from, say, the Boston area, or ordering rats or mice from a laboratory-supply breeder.

Australia: Human and Rat heterogeneity, Australia

Human and Rat heterogeneity, Australia

Human and Rat heterogeneity, Bangladesh

Human and Rat heterogeneity, Bangladesh

Human and Rat heterogeneity, Burma

China: Human and Rat heterogeneity, China

Human and Rat heterogeneity, India

Human and Rat heterogeneity, India

Human and Rat heterogeneity, Indonesia

Human and Rat heterogeneity, Pakistan (Peshawar)

Human and Rat heterogeneity, Paraguay

South Africa: Human and Rat heterogeneity, South Africa

St Petersburg:
Human and Rat heterogeneity, St Petersburg

Human and Rat heterogeneity, Tanzania

Human and Rat heterogeneity, Tanzania

Human and Rat heterogeneity, Thailand

United Kingdom:
Human and Rat heterogeneity, United Kingdom