The discussion section of your manuscript can be one of the hardest to write because it requires you to think about the meaning of the research you have done. An effective discussion section tells the reader what your study means and why it is important. In this article, we will cover some pointers for writing clear, well-organized discussion and conclusion sections and discuss what should NOT be part of them.
Your discussion is, in short, the answer to the question “what do my results mean?” The discussion section of the manuscript should come after the methods and results sections and before the conclusion. It should relate back directly to the questions posed in your introduction and contextualize your results within the literature covered in your literature review. To make your results clear to your reader, you should include the following information:
Your discussion should NOT include any of the following information:
There are several ways to make the discussion section of your manuscript effective, interesting, and relevant. Most writing guides recommend listing the findings of your study in order from most to least important. You would not want your reader to lose sight of the key results that you found. Therefore, put the most important finding front and center.
Imagine that you conduct a study aimed at evaluating the effectiveness of stent placement in patients with partially blocked arteries. You find that despite this being a common first-line treatment, stents are not effective for patients with partially blocked arteries. The study also discovers that patients treated with a stent tend to develop asthma at slightly higher rates than those who receive no such treatment.
Which sentence would you choose to begin your discussion?
1. Our findings suggest that patients who had partially blocked arteries and were treated with a stent as the first line of intervention had no better outcomes than patients who were not given any surgical treatments.
2. Our findings noted that patients who received stents demonstrated slightly higher rates of asthma than those who did not. In addition, the placement of a stent did not impact their rates of cardiac events in a statistically significant way.
If you chose the first example, you are correct. If you aren’t sure which results are the most important, go back to your research question and start from there. The most important result is the one that answers your research question.
It is also necessary to contextualize the meaning of your findings for the reader. What does previous literature say, and do your results agree? Do your results elaborate on previous findings, or differ significantly?
In our stent example, if previous literature found that stents were an effective line of treatment for patients with partially blocked arteries, you should explore why your results are different in the discussion. Did your methodology differ? Was your study broader in scope and larger in scale than the previous studies? Were there any limitations to previous studies that your study overcame? Alternatively, is it possible that your own study could be incorrect due to some difficulties you had in carrying it out? Think of your discussion as telling the story of your research.
Finally, remember that your discussion is not the time to introduce any new data, or speculate wildly as to the possible future implications of your study. However, considering alternative explanations for your results is encouraged.
Many writers confuse the information they should include in their discussion with the information they should place in their conclusion. One easy way to avoid this confusion is to think of your conclusion as a summary of everything that you have said thus far. In the conclusion section, you remind the reader exactly what they have just read. Your conclusion should:
Your conclusion should NOT:
An appropriate conclusion to our hypothetical stent study might read as follows:
In this study, we examined the effectiveness of stent placement in patients with partially blocked arteries compared with non-surgical interventions. After examining the five-year medical outcomes of 19,457 patients in the greater Dallas area, our statistical analysis concluded that the placement of a stent resulted in outcomes that were no better than non-surgical interventions such as diet and exercise. Although previous findings indicated that stent placement improved patient outcomes, our study followed a greater number of patients than the major studies previously conducted. It is possible that outcomes would vary if measured over a ten or fifteen year period, and future researchers should consider investigating the impact of stent placement in these patients over a longer period of time than five years. Regardless, our results point to the need for medical practitioners to reconsider the placement of a stent as the first line of treatment as non-surgical interventions may have equally positive outcomes for patients.
Did you find the tips in this article relevant? What is the most challenging portion of a research paper for you to write? Let us know in the comments below!
This is the second in a series of articles that discusses the different databases used in literature review and how they are important to your research and writing. Part I discussed the reference databases geared toward the life sciences and related fields. This article discusses those used in the field of medical research. The following is a list of literature review databases for medicine. There are many such databases available; however, these are the most commonly used in the medical and related research fields.
PubMed Central (PMC) has more than 16 million citations from science journals that date back to the 1950s. This archive is maintained by the US National Library of Medicine (NLM) at the National Institutes of Health. It covers peer-reviewed papers from biomedical and life sciences journals. NLM was legislatively mandated to maintain biomedical research information. While NLM maintains this information in printed form, PMC maintains it digitally. This is a free-access database.
PMC is not just a database of references. It houses full articles from journals all over the world. Journals that wish to participate in this public-access forum are reviewed for technical accuracy and the quality of their digital files.
Although the goal is to maintain free access to these articles, some copyright restrictions still apply. Be sure that you comply with those rules when using information from PMC.
CINAHL Complete, part of the CINAHL suite, is the best research tool for those in nursing and related fields. This database provides full-text articles from more than 1,500 journals, and its index covers 5,000+ journals. It contains more than 5 million records, with coverage dating back to 1937.
For those in the nursing, health care, and related fields, CINAHL Complete is a valuable research tool. The website offers additional helpful information, such as books on health care and related conference proceedings. Author affiliations are included in the reference information. Health care professionals can also further their education using the online modules from this accredited source.
MEDLINE Complete is a full-text database. It provides access to more than 2,000 journals from 1916 forward. The journals included in the database are highly recognized biomedical and health publications, making MEDLINE Complete an essential resource for health professionals and researchers.
The subjects covered comprise disciplines such as biomedicine, bioengineering, and health policy. The full text of the MEDLINE Complete journals is unique and not found in other related databases, such as Academic Search or Biomedical Reference Collection. Standard Medical Subject Headings (MeSH) can be used to search the database.
PsycINFO is a reference database that covers published articles in the behavioral sciences. The database is maintained by the American Psychological Association. PsycINFO covers a wide range of global research in psychology and related fields, such as neuroscience, law, and education.
The database contains close to 4 million bibliographic records and indexes more than 2,500 journals. The information dates back to the 1800s and includes books and dissertations. PsycINFO covers publications from more than 50 countries in 29 languages. It is updated weekly and touted as one of the most current databases in the disciplines mentioned. Each record is reviewed for accuracy before being included in the database, and regular updates continue to improve its ease of use and indexing features.
As you plan to begin your literature search, visit the website for each of the databases mentioned above. Some, such as PubMed, are open access; others provide free trials but require a subscription after the trial is over. Use the free trials and any tutorials to your advantage. They will help you begin your searches and instruct you on the best strategies to refine them.
What medical or related field of study does your research encompass? Which reference resource do you use and why? If different from those listed here, please provide a link to it for other readers.
The abstract is in many ways the most important part of an academic paper. Peers and reviewers alike decide whether or not they will continue reading an article based on its abstract. It is important that academic writers choose the type of abstract that will present their work concisely, informatively, and in a way that draws readers’ interest. While there are many styles to choose from, including descriptive, indicative, and informative, in this article we will look at structured abstracts, ways to write them, and the importance of using this format.
Traditionally, an abstract is written in a format much like an executive summary: one paragraph of continuous writing in narrative form. The abstract provides readers with a summary of the research objective, methods used, results obtained, and conclusions. You may be familiar with abstracts like this:
Recent evidence has suggested that maternal mortality rates can be affected by hospital facility organization and design, including process design. The present study aims to investigate the role of process design in decreasing maternal mortality rates. This survey used statistical analysis of data collected from 45 hospitals in the greater Orange County area between 2005 and 2008, a period during which a new process design aimed at reducing maternal mortality was introduced to half the target group. Analysis found that improved process design in the treatment of hemorrhaging birthing mothers reduced maternal mortality by an average of 15%. Based on these findings, it seems that hospitals can improve patient outcomes by revisiting and improving their process structure and designs.
This format is known as an unstructured abstract. However, in the mid-20th century, the scientific community began looking for a new abstract format that could fit more information into the same amount of space, and the structured abstract was developed. Structured abstracts are generally favored by medicine-related publications, as they help health professionals quickly choose clinically relevant and methodologically valid journal articles.
Both types are used today, and you should always follow the journal guidelines when writing your abstract.
Structured abstracts assist the reader in quickly understanding the findings of a study and, unlike unstructured abstracts, are divided into clear sections with distinct headings. These headings typically consist of at least objective, methods, results, and conclusions. Structured abstracts also do not require you to write complete sentences. Let us take the unstructured abstract above and see what it looks like as a structured abstract:
Objective: To investigate the role of process design in reducing maternal mortality.
Methods: 45 hospitals were surveyed and data were collected in greater Orange County between 2005 and 2008. SPSS regression analysis was performed. The analysis period coincided with the introduction of a newly designed process for treating hemorrhaging in birthing mothers.
Results: The analyzed process was found to have reduced maternal mortality an average of 15%.
Conclusion: Hospitals may improve patient outcomes by redesigning their processes.
Note that each section is outlined clearly with a heading and the writing style is condensed. The reader is able to easily skip to the most relevant portion of the article and decide whether she or he wants to keep reading.
Some structured abstracts may request additional information such as background, design, participants, independent and dependent variables, limitations, and so on. These can be included as separate headings or information within the above applicable categories.
Even with the elements of the abstract laid out in front of you, it is often challenging for a writer to summarize their thoughts clearly and succinctly. One way to write your structured abstract is to break down each category into a question.
BACKGROUND: What’s the latest knowledge on the issue? Some key phrases to use here are: recent studies/although some clinical research has established x, the role of y is not well known.
OBJECTIVE: What did you want to find out? Some key phrases to use here are: This study examines/To ascertain/To identify/To understand
METHODS: How did you go about finding it? What type of methodology did you use? A quantitative study/a randomized controlled study/a qualitative survey/a literature review/a double blind trial
RESULTS: What did you find? What data or outcomes did you observe? You can use phrases such as X was observed due to Y. Do not be vague! State exactly what you found.
CONCLUSION: What did your results tell you? Did you find out what you wanted? Why or why not? What should be studied next? Use phrases such as X was statistically significant, Variable A has a negative correlation with Variable B, etc.
One pitfall to watch out for is describing what your paper contains rather than reporting what it found. The abstract should highlight the most important information you discovered: it is a very brief and informative summary, not a teaser! Avoid phrases like “data is analyzed using a method discussed in the paper,” “the significance of the study is discussed,” or “based on the results abc, conclusions are drawn.” Instead, state clearly “a double blind study was conducted” or “the results of the study show that oral administration of glucosamine can have a statistically significant impact on diabetes management.” Finally, always follow the specifications of the journal you are writing for and choose the format most appropriate for your study.
Do you prefer structured or unstructured abstracts? What challenges do you encounter in writing abstracts? Let us know in the comments!
Coherence is an essential quality of good academic writing. In academic writing, the flow of ideas from one sentence to the next should be smooth and logical. Without coherence, the reader will not understand the main points that you are trying to make, and readability suffers. Cohesion necessarily precedes coherence. There is a difference between the two terms: cohesion is achieved when sentences are connected at the sentence level, whereas coherence is achieved when ideas are connected. In addition, cohesion focuses on the grammar and style of your paper.
Coherence also means “clarity of expression,” and it is created when correct vocabulary and grammar are used. After all, the goal of writing is to benefit the readers. Without both coherence and cohesion, readers may detect choppiness in the text and feel as if there are gaps in the ideas presented. Needless to say, texts without coherence are difficult to read and understand. Such writing defeats the whole purpose of writing, which is to relay ideas in a clear and efficient manner. Fortunately, there are strategies that you can use to ensure coherence and cohesion in academic writing.
Paragraph coherence and cohesion result in paragraph unity. To ensure that your paragraphs have unity, keep two things in mind: each paragraph must have a single topic (stated in the topic sentence), and the supporting sentences must provide more detail than the topic sentence while maintaining focus on the idea presented. The paragraph below shows a lack of unity:
Non-cohesive sample: Dogs are canines that people domesticated a long time ago. Wolves are predecessors of dogs and they help people in a variety of ways. There are various reasons for owning a dog, and the most important is companionship.
Cohesive sample: Dogs are canines that people domesticated a long time ago, primarily for practical reasons. Even though dogs descended from wolves, they are tame and can be kept in households. Since they are tame, people have various reasons for owning a dog, such as companionship.
Notice that the ideas in the non-cohesive sample are not arranged logically. The sentences are not connected by transitions and give the readers new ideas that are not found in the topic sentence. Thus, the paragraph is hard to read, leaving readers confused about the topic. On the other hand, the cohesive sample has ideas arranged logically. All ideas in this sample flow from the topic sentence. In addition, they give more details about the topic while maintaining their focus on the topic sentence.
It is important to focus on coherence when writing at the sentence level. However, cohesion smooths the flow of writing and should also be established. There are various ways to ensure coherent writing:
Academic writing is improved by coherence and cohesion. Without coherence and cohesion, readers will become confused and eventually disinterested in the article. Your ideas then become lost and the primary objective of writing is not achieved.
There are six ways to create coherence, which you will find useful while polishing your manuscript. Creating coherence is not as difficult as it seems, but you will need the right tools and strategies to achieve it.
Academic writing should be concise, coherent, and cohesive. Maintaining these three qualities involves using a number of strategies to impart ideas to the reader. After all, that is the whole point of any type of writing.
In the first part of this series, we highlighted the challenges of image manipulation and the perspectives of authors and journals on the issue.
So who’s really in charge of making sure that journals do not publish manipulated images? Mike Rossner, for one. While at JCB, he discovered that “about 1% of accepted papers had manipulated images that affected their conclusions; another 25% had some sort of manipulation that violated guidelines.” After Rossner left JCB, he founded Image Data Integrity (IDI), which identifies biomedical data manipulations.
Although Rossner concurs with Bik that some image manipulations may not be strictly unethical, he counsels caution, since something as innocuous as “clean[ing] up unwanted background in an image” can have unintended consequences. “[W]hat may seem to be a background band or contamination may actually be real and biologically important,” Rossner warns, “and could be recognized as such by another scientist.”
Outsourcing image screening to a company like IDI is one approach to identifying altered scientific images. Another solution is that of Germany’s EMBO Press, where in-house image detective Jana Christopher is a full-time image screener. “I check to see if micrographs, photographs, and data … are duplicated or illicitly manipulated,” Christopher explains. “I use tools … [that] allow you in a semi-automated procedure to adjust the contrasts and settings to highlight flaws. It’s a simple process, but you need to know where to look. For example, it is much harder to spot when the blank background on a gel has been cloned—it takes an experienced eye to spot those patterns. I’m very aware of the limitations. If someone really tries to cheat and they cheat well, it’s unlikely that we would see that. But most of what we find are genuine mistakes, which we prevent from entering the scientific literature.”
The forensic tools Christopher and others rely on include Photoshop droplets, as well as Adobe Bridge and ImageJ. Droplets are particularly useful when screening for adjustments of light or dark areas and comparing two images. Bridge is suited to rapid processing of large image batches, and the highly versatile ImageJ freeware can produce quantitative scans of gel bands and display multiple images simultaneously.
As the technology for detecting image manipulation expands and improves, so will the technology for manipulating images. These software tools allow reviewers, editors, and scientists to identify lapses in image integrity, but are they also giving rise to unprecedented scientific misconduct?
“Research error and misconduct have probably always existed,” Bik points out. “Even scientific luminaries such as Darwin, Mendel, and Pasteur have been accused of manipulating or misreporting their data.” The low resolution of older illustrations likely made manipulation more difficult to discern, but Bik still believes that “the widespread availability and usage of digital image modification software in recent years may have provided greater opportunity for both error and intentional manipulation.”
In Bik’s solution, authors take responsibility: “One possible mechanism to reduce errors at the laboratory level would be to involve multiple individuals in the preparation of figures for publication. The lack of correlation between author number and the frequency of image duplication suggests that the roles of most authors are compartmentalized or diluted, such that errors or misconduct are not readily detected.”
Should authors follow Bik’s model for conducting rigorous due diligence before submitting papers? Would involving multiple scientists in fact-checking increase or decrease the likelihood of error?
Should journals and publishing houses conduct due diligence against image manipulation internally? If so, should peer reviewers be part of that initial process, or should specialist editors screen for manipulated images in accepted papers only?
Or do you most trust a company like IDI to provide impartial, high-quality screening and detection services?
Please share your thoughts in comments! Then take a moment to find out more about how journals respond when a problematic image does make it into a published paper and how to guard against scientific misconduct in your own work.
Still not convinced that manipulation of images always constitutes research misconduct? Click on to get tips on how to edit your images ethically, acceptable and unacceptable categories of image manipulation, and specific tips and techniques for keeping on the right side of science.
A digital library serves as an online archive of information and can be one of the most useful tools for researchers. One such tool is CiteSeer, which was launched in the late 1990s and has since been relaunched, with added features, as CiteSeerX. Let us have a quick look at both.
In 1998, the academic search engine CiteSeer went public, changing the landscape for online research. It offered autonomous citation indexing for the first time to researchers in the fields of computer science and information science. When a scholar searched for an author name, keyword, or journal, CiteSeer would return relevant results for the search term. These results were not only drawn from full-text publications but also reflected every known instance in which that term appeared in bibliographic citations. From the outset, CiteSeer was able to crawl both PDF and HTML files. It was a revolutionary technology and laid the groundwork for later online access tools such as Google Scholar.
CiteSeer had its drawbacks, however. For one thing, it could only index papers that were already available online to the public: either papers that authors had submitted directly to CiteSeer, or papers that authors had published on their own websites. Another challenge was its popularity and growing scale. CiteSeer’s infrastructure was not equipped to handle 1.5 million searches every day or to index three-quarters of a million documents. To address concerns like these, CiteSeerX was launched in 2008.
Here are some of the things CiteSeerX can do for you:
Create a personal collection of articles and citations.
Receive automatic notifications of new citations relating to a paper you’ve saved in your user profile, as well as notifications of new papers that are relevant to your past searches and accessed articles.
Personalize searches and save favorite search settings.
Share articles automatically via social media. You can also submit your own articles to the CiteSeerX digital library.
By 2015, CiteSeerX was making more than five million articles on computer science and information science available to the approximately one million unique online patrons of its virtual library, while processing millions of searches every day. By 2017, its holdings had grown to more than seven million documents, with two hundred thousand new scholarly papers added every month.
CiteSeerX still cannot crawl publisher metadata when processing searches (it remains limited to uploaded submissions and open sources such as author websites). To compensate, it provides direct links for trying your query at other citation indexes, such as the DBLP Computer Science Bibliography and AllenAI Semantic Scholar.
Nevertheless, in 2010 it was voted the #1 online information repository worldwide. Inarguably, the CiteSeerX citation index plays a significant role in the scientific community. Already a cornerstone of information for computational and information sciences, it has begun expanding its reach to include articles and citations related to areas such as economics and physics as well.
Try a CiteSeerX search and please let us know about your experience in comments! Have you had better or worse luck with alternative access tools such as Web of Science or SciELO? Would you consider submitting your scholarly writing and research for CiteSeerX bibliographic indexing, and why or why not?
In 2009, researcher Hwang Woo-Suk was convicted of research misconduct that included embezzlement and unethical procurement of human eggs. Among his less widely reported ethical violations, however, was the manipulation of images to show negative staining for a cell-surface marker.
In 2013, readers of Cell discovered duplicate images in a paper by reproductive biologist Shoukhrat Mitalipov. In 2016, Pfizer cancer researcher Min-Jean Yin was fired for duplicating Western blot images. Similarly, Portuguese scientist Sonia Melo lost her grant funding for the same reason. Mitalipov and Melo insist their duplications were due only to sloppiness and that their conclusions are still reproducible.
Microbiologist Elisabeth Bik is the authority on image integrity in scientific publishing. In her 2016 exposé, “The Prevalence of Inappropriate Image Duplication in Biomedical Research Publications,” she acknowledges that “inaccuracies in scientific papers have many causes,” sloppiness being one of them. Some misrepresentations “result from honest mistakes while others are intentional and constitute research misconduct, including situations in which data is altered, omitted, manufactured or misrepresented in a way that fits the desired outcome.” Bik assigned problematic images to five categories: simple duplication, duplication with repositioning, duplication with alteration, cuts, and beautification.
Cuts and beautification (the latter of which can assist readers afflicted by color blindness) don’t always constitute research misconduct. Duplication almost always does. Whether that misconduct is intentional or accidental, the experiments based on flawed findings are invalid and papers citing manipulated images must be retracted. That’s why the majority of the scientific community agrees that journal editors or peer reviewers have an important role in identifying data integrity issues before publication.
Yet even after the Hwang and Yin scandals, many such violations of image integrity are discovered by readers after publication, partly because discerning unethical image processing can be very difficult. As one PubPeer commenter notes, “It is so easy to cheat … without leaving traces …”
Many journals have implemented screening procedures, with the Journal of Cell Biology (JCB) and its then–managing editor Mike Rossner leading the pack in 2002.
Journals have struggled with the responsibility, though. As former Science editor Donald Kennedy complained in 2006, “We are … considering the kinds of special attention that might be given to … high-risk papers … [including] more intensive evaluation of the treatment of digital images … [But] the experience will be time-consuming and expensive for the journal and may lead to conflict with authors.”
We still lack a shared rulebook for image integrity. The Council of Science Editors (CSE) assigns responsibility to authors, recommending that they disclose alterations even when data is not misrepresented. For journals, the CSE provides links to guidelines such as those of Rockefeller University Press, which prohibit enhancing, obscuring, moving, removing, or introducing any specific features within images but permit some adjustments to brightness, contrast, color, and groupings. However, neither the CSE nor the Office of Research Integrity has implemented any universal, required standards for the publication of scientific images.
Even some scientists are reluctant to accept strict guidelines on data integrity. In 2014, the Committee on Publication Ethics shared a concern from the managing editor of a scientific journal: “Many laboratories consider photographs as illustrations that can be manipulated, and not as original data. Thus gels are often cleaned of impurities, bands are cut out and photographs of plant material only serve to show what the authors want to demonstrate, and the material does not necessarily originate from the experiment in question.”
The editor emphasized scientists’ resistance to journals’ attempts to protect against research misconduct in image processing: “When the editor-in-chief rejected such a manuscript, a typical response was: I am surprised by the question and problem you pointed out in our manuscript. I checked the pictures you mentioned and I agree that they are really identical. But please be reminded that the purpose of these gel pictures was only to show the different types of banding pattern, and the gels of a few specific types were not very clear, so my PhD student repeatedly used the clearer ones. This misleading usage does not have an influence on data statistics or the final conclusion.”
So, who is really in charge and what measures can be taken from both ends? Stay tuned for our next article discussing the measures and techniques to detect image manipulation in scientific publishing.
What do math geniuses, scientific visionaries, economic gurus, legal scholars, leaders in government, business mavens, and the most brilliant minds in academia all have in common? No matter how skilled and informed they are in their fields, they all have to grapple with the technical requirements of the research paper or research report. Almost inevitably, they come across the terms “appendix” and “annex,” and researchers and academics at every level and in every field find themselves confused by the annex. What is an annex, why is it important to researchers, and how is it different from an appendix? Let’s take a quick look at the annex in your research paper.
Many researchers are more familiar with the appendix than with the annex. Like the annex, the appendix is a supplement or attachment to a research paper but is not part of the body of the paper. It contains information that helps readers understand the thesis or it provides essential background on the research process. However, this information is too long or detailed to fit into the main text. Such information could include complex sets of graphics or tables, for example; or it could take the form of long lists of raw data, such as population figures.
An appendix is a kind of annex. In other words, every appendix is an annex, but not every annex is an appendix.
Clearly, the terms are closely related. In practice, however, we can make some general distinctions between the two.
You may be wondering whether you really need to understand the distinction between annex and appendix, as long as you’re attaching all the supplementary material that your research paper requires. Indeed you do! Depending on the academic or publishing style guide you’re working with, you may be required to style an annex differently from an appendix. Indexing, page numbering, and the manner of attachment to the paper are some of the aspects that may differ between an annex and an appendix.
Now let’s look at some specific examples of appendices and annexes.
Whatever your field or specialization, if you are doing, and appropriately documenting, serious research, you must make strong, informed use of the annex and understand how it differs from the appendix. See an example of how annexes are used, and let us know your questions and personal experiences in the comments.
An annex is an important part of a research paper. For more information on the other sections of a research paper, you can review the structure of a research paper. Also make sure your manuscript covers all the required sections and is ready to be submitted to a reputed journal.
When writing your research paper, keep a list of your sources of information. You must ensure that any information you use from other sources is properly cited and referenced. This means that you must “acknowledge” the source both in the text with a citation and at the end of the paper in your references. This is important! You want to avoid plagiarism at all costs. Plagiarism simply means that the writer has used someone else’s work without giving that person proper credit. It is a serious offense that can result in a manuscript not being published, or being retracted after publication. It can also result in disciplinary action and will most certainly affect your credibility as a researcher. Enago Academy has several excellent articles on plagiarism that will help you understand and avoid it.
Here, we provide information on how you must handle citations of your sources and some of the common formats.
There are three types of citations:
The Timber Wolf was once a great predator throughout North America.⁵
The Timber Wolf was once a great predator throughout North America (Smith, 1970).
The first example follows the American Medical Association (AMA) style guide; the second follows the American Psychological Association (APA) style guide.
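To make the contrast between the two styles concrete, here is a small, purely illustrative sketch in Python that renders the same wolf example in both forms. The reference data (author, year, reference number) is invented for illustration and does not come from a real source.

```python
# Hypothetical source record; the fields are invented for illustration.
source = {
    "author_last": "Smith",
    "year": 1970,
    "number": 5,  # position in an AMA-style numbered reference list
}

# AMA style places a superscript reference number after the sentence.
ama = f"The Timber Wolf was once a great predator.{source['number']}"

# APA style uses an author-date citation in parentheses.
apa = (
    "The Timber Wolf was once a great predator "
    f"({source['author_last']}, {source['year']})."
)

print(ama)  # The Timber Wolf was once a great predator.5
print(apa)  # The Timber Wolf was once a great predator (Smith, 1970).
```

Either way, the in-text marker must point the reader to a full entry in the reference list at the end of the paper.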
If you are writing a paper to be published in a journal, the author guidelines will provide you with the style to be used. If you are writing a paper for a class, your professor will provide you with that information. APA, MLA, AMA, and Chicago Manual of Style are the most commonly used styles in academic writing.
When you provide references, you provide some assurances that you have done your research. The reader will be better able to assess whether your information is valid. This is important to your credibility.
You need not cite every piece of information that you use, but you should become familiar with the rules outlined here. These apply to all sources, including newscasts, websites, and even television and radio programs.
There should be some balance between cited materials and original thoughts; however, this will also vary by discipline. For example, if you are reviewing a piece of art, your paper will have few citations. Most of the text will be your opinion. On the other hand, your paper on your research study will have a great number of citations that show examples of or back up your findings. In all cases, all cited material should be discussed, and all major points should be supported and cited.
Here are some basic rules for when to always cite another’s work. Remember that it is better to err on the side of caution. When in doubt, add the citation.
“Experienced riders instinctively understand the body language of their horses. Body language is extremely important because it’s how horses communicate. Trainers spend hours and hours doing basic groundwork, which ultimately translates to the saddle. This is a must for any rider to be able to have a good and safe relationship with her horse.”
A horse’s body language is very important to understand. Those who have been training horses for years know and understand this. Groundwork with your horse is a basic necessity to be able to understand his body language. This work will transfer directly to your riding and will make the experience safer.
The best way to begin is by looking for the main points in the text and discussing them in your own words. Define any technical terms for the reader.
Example from above:
Knowing how to interpret your horse’s body language is extremely important for your relationship and safety.
Example (need not be cited):
St. Paul is the capital of Minnesota.
Example (should be cited):
St. Paul is the capital of Minnesota and has the highest rate of multiple sclerosis in the state.
Although it is common knowledge that St. Paul is the state capital, the claim that it has the highest rate of multiple sclerosis in the state is statistical information. This fact must be cited.
Keep in mind that academic institutions in different countries might have different rules for citing sources. Become familiar with your institution’s rules to avoid any confusion when writing your paper.
You may have several sources, and many of those sources cite further references that you want to use as well. How do you cite these secondary sources? Share your thoughts in the comment section below!
As we saw in the first article of this series, punctuation is important because it helps your reader understand the writing clearly. Research papers often contain complex ideas and long sentences. Hence, proper punctuation within these sentences is very important. It helps to strengthen your arguments and remove any potential confusion your readers may have. In this article, we will discuss some more about punctuation in research papers and review some common punctuation marks: the semicolon, the colon, and quotation marks.
It rained all day; we were cold, wet, and miserable standing outside.
In this sentence, the two parts that are separated by a semicolon could each be a separate sentence. For example:
It rained all day. We were cold, wet, and miserable standing outside.
These two clauses are also closely related to one another–presumably, the people in this sentence were “cold, wet, and miserable” standing outside due to the rain. How about this example?
It rained all day; I thought about having chicken for dinner.
While these clauses are also independent, the two ideas aren’t really related. Generally, the weather doesn’t influence our choice of dinner. It would therefore be inappropriate to use a semicolon in this case.
Use a semicolon to join two independent clauses when the second clause begins with a conjunctive adverb (however, therefore, moreover, furthermore, thus, meanwhile, nonetheless, otherwise) or a transition word (in fact, for example, that is, for instance, in addition, in other words, on the other hand, even so).
Our results indicated a correlation; however, the correlation was weak.
The conference was attended by Senator McCaskill, Finance Committee Chair; Senator McConnell, Party Whip; and Mr. Ivan McGregor, a lobbyist from the telecommunications industry.
Here a semicolon separates the names in this list because the names are followed by commas to indicate each person’s job description. How about this sentence?
The plants can be found in Michigan; Illinois; and New Jersey.
This example is incorrect because none of the items in the list contains an internal comma, so plain commas should separate them. How could we rewrite it so that the semicolons are warranted?
The plants can be found in Detroit, Michigan; Champaign, Illinois; and Trenton, New Jersey.
Scientists remain puzzled by this outcome: prior research suggested that it should have been impossible.
Note that the distinction between a semicolon and a colon can be a bit confusing. One good way to remember the difference is that in the example above, the colon emphasizes the second clause. In our earlier example, “It rained all day; we were cold, wet, and miserable standing outside”, our “cold, wet, and miserable” condition is closely related to the rain, but the two parts of the sentence carry equal emphasis. In other words, if you wish to emphasize the second clause, use a colon. If you want the two clauses to have equal weight, use a semicolon.
In this paper we examine four major concepts: agenda setting, problem definition, collaborative problem-solving, and policy design.
In her impassioned speech before the court, Mrs. Ginsburg argued that sex, just like race, was an intrinsic and unchangeable quality and should therefore be treated the same in discrimination cases: “Just as I cannot change my race, and it is apparent to everyone around me, I cannot change my sex, and it is a primary identifying characteristic. If the court has established that race is not to be discriminated on these grounds, why is sex any different?”
According to Johnston, “Cats that are not fed once every three hours may exhibit needy behavior.”
People say that women who are not married by the age of 40 are “old maids.”
According to Keystone, “Members of Parliament were reluctant to disclose instances of accepting bribes, stating ‘we cannot be certain which payments were legitimate and which were not’.”
Note that quotation marks should not be used for ordinary emphasis on words. For example:
Incorrect: Our market sells fresh “corn.” (misplaced emphasis)
Correct: The New York Times said that our corn is “the freshest in town.” (direct quote)
In this article, we discussed the semicolon, the colon, and quotation marks. In the next article of the series, we will discuss the apostrophe as well as the differences among the hyphen, en dash, and em dash.
What are the other punctuation marks in research papers that challenge you? Do you have more questions about semicolons, colons, and quotation marks? Please share your thoughts with us in the comments section below.
Research ethics is the essential code of conduct that governs academic research. It is a set of norms that defines acceptable behavior. Unethical behavior often affects academic publishing; for instance, researchers may publish falsified data. Fortunately, many groups are now promoting research ethics. So how does one maintain ethical conduct in academic research?
Ethical research first requires honesty. This means that researchers should not falsify or misrepresent data. Each researcher must report their data as is, including the methods and results, even when the findings are unfavorable. They should never alter the data in order to deceive colleagues, funders, or the public.
Linked to honesty is objectivity. Studies must be designed to minimize bias, and researchers must actively avoid bias in data analysis and interpretation as well. Any personal or financial interests should be disclosed along with the research; this alerts readers to any potential influences that may have affected the work.
In addition, all animals used in the research must be properly cared for. Experiments should be designed well. This means that the design must be statistically sound. This will help researchers to use only the number of animals that is necessary. A thorough literature search should be done to avoid repeating animal studies. It is wasteful to experiment on animals if conclusive published data exists.
When humans are the subjects of research they must be treated well. Every effort must be taken to minimize risks and maximize benefits. At every point, the rights to autonomy, privacy, and dignity must be respected. Special care must be taken when working with vulnerable populations.
Some believe that misconduct is simply the work of bad researchers. The alternate theory holds that misconduct happens because of external factors, such as the pressure to publish or win grants, incentives, or constraints. Misconduct can also occur because of poor supervision, career ambition, or the pursuit of fame. Every researcher will face pressure at one time or another. What is the best way to ensure that they do the right thing?
Ethical conduct is essential in inspiring trust. When scientists abide by research ethics, their work is trustworthy. Academic research institutions often wish to encourage their staff to behave ethically.
Institutions can promote ethical behavior by having formal and informal research ethics education. Formal education will expose researchers to ethical standards and policies. Using real-world examples can teach researchers about the importance and consequences of alternate responses to an ethical dilemma. Public discussions in an ethics course may discourage unethical behavior. This happens because participants talk about the potential harm that can result.
Institutions should do a few things to teach faculty and students research ethics.
Fortunately, ethical guidelines are available for various disciplines. For example, HEART, a group of editors of major cardiovascular journals, has issued an ethics statement for publishers. Medical laboratory staff can learn from the American Society for Clinical Laboratory Science’s code of ethics. Professors may adhere to the American Association of University Professors’ professional ethics statement.
Research ethics can be a very tricky subject. Ethical conduct is essential to researchers being trustworthy. Many institutions are now promoting research ethics. Academic research and academic publishing only have value when researchers behave ethically. You can get detailed guidance on the ethics of working with people here. Furthermore, if you are thinking about implementing an ethics course, you can read this for more tips.
Why do you think it is important for researchers to behave ethically during their research project? Please let us know your thoughts in the comments below!
Finding the right academic journal is central to avoiding the common outcome of editorial rejection of a manuscript prior to peer review. The Springer Journal Suggester is an academic research tool that enables users to select the journal best suited to their research. The automated process draws on a database of over 2,600 Springer publications. The web-based semantic technology refines a list of relevant journals based on the manuscript title, abstract, and preferred publishing model. The personalized recommendation process searches Springer and BioMed Central to find the publication that best suits the author’s needs. A refined list of potential journals can thereby help authors settle on a core publication for their final manuscript submission.
The web-based Journal Suggester is easily accessible, requiring only an abstract or description of the unpublished manuscript to find matching journals. For authors selecting a journal manually, the stepwise instructions below, from Springer and BioMed Central, offer general guidance. The online Journal Suggester automatically weighs the same key points while generating personalized recommendations.
You can further refine the web-based recommendations by adding the following parameters to the semantic analysis:
For transparency, the entire database of Springer Open Access journals scanned during the automated refining process is also available online.
The practice of research publication, from proposal to journal article, should align with best practices and codes of conduct. Publishing ethics therefore highlight the researcher’s responsibility toward publication of the finalized manuscript. Selecting a journal via the Journal Suggester depends on inputs of the unpublished manuscript’s abstract, research description, or a sample text. You can refine the results based on three parameters: 1) publishing model, 2) Impact Factor, and 3) journal access. With a list of candidate journals at hand, the following user guide will assist in the publishing process:
i) Ask a native English-speaking colleague to review your manuscript for clarity.
ii) Visit the English language tutorial designed to assist non-native English speaking scientists.
iii) Use a professional language editing service to help you refine your manuscript.
Springer conveniently presents a list of Springer Videos on its online platform for user-friendly assistance with journal selection. Once the Journal Suggester provides a list of target journals, SpringerLink journal tutorials can guide the selection of your final choice. Automation offers busy scientists a fast-track process for selecting the journal best suited to their research. If you are keen to publish quickly, the Journal Suggester provides the option of setting a ‘maximum time to first decision’. To broaden your readership, you can restrict the refining process to open access journals. After choosing a journal of interest, it is also worth identifying a second and third choice; this provides a broader range of alternatives should the first attempt at publication fail.
The automation of the Journal Suggester is beneficial overall for fast-paced, cutting-edge research publication. The portal has its limitations, however: it can only match the manuscript as written, and it cannot advise on the broader research itself, such as whether additional experiments could increase the publication’s impact. Furthermore, manually browsing journals may give you first-hand familiarity with relevant journals, albeit at the cost of time. The expected outcome of the automated Journal Suggester is to minimize editorial rejection of manuscripts prior to peer review. Overall, the benefits of this web-based academic research tool appear to outweigh its potential limitations.
Recently, Enago Academy launched the Open Access Journal Finder (OAJF), which helps research scholars find open access journals relevant to their manuscripts. OAJF uses a validated journal index provided by the Directory of Open Access Journals (DOAJ), the most trusted non-predatory open access journal directory, in its search results. Moreover, the tool displays vital journal details to scholars, including publisher details, the peer review process, a confidence index (indicating the similarity between matching keywords in the published articles across all journals indexed by DOAJ), and publication speed.
Have you used the Springer Journal Suggester to identify a suitable journal for your manuscript? Please let us know your thoughts in the comments below!
Peer review of academic research is at the heart of publishing, and it is important that this process not be tainted by reviewer bias. Two popular modes of review exist. In single-blind peer review, the authors do not know who the reviewers are, but the reviewers know who the authors are. In double-blind peer review, neither authors nor reviewers know each other’s names. Single-blind peer review is the traditional model; both models, however, aim to reduce bias in peer review.
At the start of 2017, the Institute of Physics (IOP) gave authors the option to choose double-blind peer review. This option was available for Materials Research Express and Biomedical Physics & Engineering Express. Over the first seven months, 20% of authors chose the double-blind peer review option. Authors from India, Africa, and the Middle East were most likely to request the option.
IOP data indicate that more papers were rejected under the double-blind model: about 70% of papers were rejected in the double-blind peer review process, compared with only 50% under single-blind peer review. The difference could be due to reviewers assuming that authors requesting this option had written poor papers, or to reviewers acting more objectively. In either case, authors in the double-blind trial were satisfied and felt it was the fairest approach.
Bias in peer review is a real problem. There have been many studies showing that women and minorities are less likely to get published, funded, or promoted. This bias can be both conscious and unconscious. Within scientific publishing, this means that fewer women are asked to review papers. It also means papers by women are cited less. There are two peer review models where identities are hidden. Which is more likely to get rid of bias?
The 2017 Web Search and Data Mining conference provided a good opportunity to test this question. In computer science, papers often appear first (or exclusively) in peer-reviewed conferences. The program committee decided to randomly split its reviewers into two groups: one would serve as double-blind peer reviewers, the other as single-blind peer reviewers. The experiment would help determine which approach carries more bias.
The authors found differences between the review groups. All reviewers had access to paper titles and abstracts and, based on these, indicated which papers they wanted to review. The single-blind reviewers requested 22% fewer papers to review. Single-blind reviewers were also more likely to choose papers from top universities or IT companies, and more likely to give a positive review to papers with a famous author.
Single-blind reviewers have access to the authors’ names and institutions, and the study indicates that author institution had a significant influence on single-blind reviewers’ decisions to bid for a paper. No bias against female authors was detected at this particular conference; however, a meta-analysis combining this conference’s data with other studies indicated a significant bias against female authors.
The Web Search and Data Mining conference experiment shows that single-blind reviewers use information about authors and institutions in their reviews. It could be that this information helps reviewers make better judgments. It could also be that it puts work from non-prestigious institutions and lesser-known authors at a disadvantage. Two papers of equal merit may be rated differently by single-blind reviewers based simply on who wrote them.
On the other hand, double-blind peer review can provide a false sense of security. Well-known authors can often be identified by the nature of their work; a paper may also reference the authors’ previous publications, and there may be other clues, such as a preference for a particular technique or compound. This means that, even without the names, reviewers can often figure out who wrote a paper. Some therefore argue that it would be better simply to tell the reviewer who wrote the paper and ask whether there is a conflict of interest.
In practice, the process of removing author information to hide identity fails 46–73% of the time. Yet the core problem is not identifying the author; it is whether reviewers hold prejudices against authors of a certain country, race, or gender. And while the focus has mainly been on reviewers, there is very little discussion of the biases of editors, who, after all, have the final say.
Peer review is part of the academic research cycle, and it is clear that bias exists in this process. Reviewer bias often affects women, minorities, and researchers from non-prestigious institutions. To fight this problem, journals use blind peer review. However, single-blind peer review gives well-known authors an advantage, and double-blind peer review may not actually eliminate bias, leading some researchers to argue that it is better to switch to open peer review.
What is your opinion of single- and double-blind peer review? Do you think double-blind peer review is any better than single-blind peer review? Or do you think it is time to switch to open peer review? Please share your thoughts with us in the comments section below.
When we think of research articles, most of the time we think of articles that present the results of studies that took a long time to complete. Generally, these articles contain theories, testable hypotheses and extensive methodological justifications for conducting analyses. There are, however, many other types of research articles that are published in scientific journals. One of them, a perspective article, presents an important topic, groundbreaking research, or a different view of an existing issue by an expert in that field of research.
Most of the research articles published by academic journals are original research articles. Journal editors tend to prefer this type, especially when it presents an important advancement in a research field or counterintuitive results. Other types of research articles include book reviews, case reports, editorials, interviews, commentaries, profiles, and perspectives. Each journal ultimately decides, based on its field and specialty, which types of articles it wishes to publish. For example, some social science journals (such as Comparative Political Studies) do not accept perspective articles, while others refer to them as letters.
Perspective research articles have an important role in the academic research portfolio. They stimulate further interest about presented topics within the reader audience. They are different from other types of articles because they present a different take on an existing issue, tackle new and trending issues, or emphasize topics that are important, but have been neglected, in the scholarly literature. In some scientific fields they bridge different areas of research that the journal publishes, while in others they bring new issues and ideas to the forefront. In general, their role is to enlighten a general audience about important issues.
While the incentive system of academic tenure and promotion emphasizes publication of original research, writing other types of articles is also beneficial for researchers in the long run. It gives them the opportunity to contribute to their discipline in different ways while enhancing their own professional work.
A perspective article is a way for young researchers to gain experience with the publication process, which can often be arduous and time-consuming. It can also be a way to learn from the publication process while they work on original research articles that often take years to complete.
For experienced researchers, writing a perspective article provides at least two distinct benefits. First, it allows them to step back and reflect on a significant issue that they may know a great deal about but have never had the time to address. Second, the researcher gets the opportunity to lend their own authorial voice to a published article that will reach a wide audience.
Before one decides to write and submit a perspective research article to an academic journal, it is important to become familiar with the article expectations of the target journal.
Although academic journals hold a similar definition and purpose for the perspective article, their technical requirements differ. When it comes to length, some journals impose strict limits, while others allow articles to vary within a given range. For example, some academic journals in the biological sciences and medicine have limits of 1,500 and 1,200 words, respectively, with defined reference and figure limits. Another journal in the same field has a less restrictive limit of 2,000–4,000 words and a more generous reference allowance.
With respect to the structure of the perspective article, journals define their expectations in different terms. Some journals place an emphasis on the structure of the article, requiring sections such as the abstract, introduction, topics and conclusion. Other journals make suggestions on the nature of the title and the specific conceptual connections in the assigned field. Some journals take the time to explain their view and expectation in writing perspective articles, make suggestions and provide lists of things to include and avoid in the perspective article.
Writing a perspective article offers many benefits to authors. Although it is less demanding than writing an original research article, an aspiring author should consult the target journal for its requirements. This will ensure that the journal’s expectations are met and that the author has a positive first experience writing this type of research article.
Have you had the experience of writing a perspective article? If yes, what key points did you keep in mind while doing so? Please let us know your thoughts in the comments section below.
Efforts to create open scholarly communication are an ongoing process within academic publishing, amid conflicting views on open access models. In this context, publications made open directly by the publisher are considered Gold open access, while those made available via institutional repositories are Green open access. However, a majority of scholarly records still remain behind paywalls, since publishers hold ownership of the intellectual property in journals.
Inevitably, a third and more controversial model, pirate Black open access, has gained prominence alongside the green and gold models. Sci-Hub is the website that infamously enabled pirate black open access to usually paywalled academic journals. The site gains access through institutional proxies to bypass publisher paywalls and permit broad public access to academic articles. Although academic publishers have sued the website for blatant piracy, some researchers consider this approach an effective and necessary means of civil disobedience. The controversy has, therefore, prompted many traditional academic publishers to reconsider their existing models of gold and green open access.
A recent op-ed on Wiley articulates this multifaceted problem, asking hard-hitting questions that probe the drawbacks of the conventional models.
To begin with, Sci-Hub has upended conventional open access frameworks by illegally making nearly all scholarly literature freely accessible. By comparison, the legitimate stakeholders in scholarly communication have only managed to open up select research articles, while the rest remains behind paywalls; statistically, a large share of publications remained unavailable via legal channels for most people in 2017. The music industry offers an instructive parallel: when it was rife with piracy, the business model needed a revolutionary change to attract users away from free downloads, and these efforts succeeded with the advent of iTunes and other pay-to-download and streaming options. This lesson has direct implications for the existing green and gold open access models, which now appear inadequate by comparison.
The existing models are far from the revolutionary business frameworks required in the present intellectual landscape. An ideal open access model would hold more than 80% of the market, mitigating piracy and creating a sustainable platform. While certain platforms such as BioMed Central, PLoS, and arXiv are sustainable, their share of all articles remains marginal. The concept of academic piracy, or “guerrilla open access”, can nonetheless be a progressive force, driving development and intellectual curiosity. How, then, should the conventional models be remodeled to overcome their inherent flaws while also allowing for revolutionary change?
A recent report on research consulting identified at least five stakeholders, each presenting roadblocks that must be removed to deliver 100% open access. This implies significant change at the level of each stakeholder, which makes such change appear almost implausible. For instance, one stakeholder, the author, must coordinate green open access with the journal of interest to avoid conflicting publishing models. Owing to this lack of effort at the single-stakeholder level, just 13% of Spanish researchers published green versions of their articles. Conversely, more researchers posted full texts on ResearchGate, despite the copyright infringement this often entails.
Since fewer than 20% of new articles are published as gold open access, authors evidently do not favor this model either. For a transition between green and gold open access, the following stakeholders would need to initiate change:
Furthermore, these changes require concerted execution across internal workflows, which has so far left the conventional open access models unaltered. Perhaps the airline industry could offer the next clue to resolving this academic conflict.
Incidentally, the airline industry model is not entirely different from the multi-stakeholder, global, and policy-bound environment of scholarly communications. Air travel, too, has evolved from its traditional model to produce an unbundled product with additional costs beyond the core service. Unbundling allowed low-fare airlines to offer extras separately, creating a cost-effective strategy in which passengers pay only for what they choose. Similarly, many services in scholarly communication could be unbundled. To achieve basic levels of open access, publishers could first offer basic read-only services for free, while seeking revenue from surrounding services. This would qualify as cost-free open access, requiring only one stakeholder, the publisher, to deliver the change.
However, beyond the unbundled basic free service to readers, what other services could separately provide an income for scholarly communication? It is possible to offer stakeholders unbundled options for select services. For authors, these could include peer review management, copy-editing, and language editing services. Readers could get access to downloadable citation tools, alerting services, and online analytics. Librarians could be provided with metadata for catalog databases, and funders could be offered reports by subject area. All of these are possible avenues for unbundled revenue. Eventually, the publishing industry too could democratize scholarly communication based on what works, much as the airline industry has. The concept would increase readership, stimulating the market for services and enabling new budgets. The niche audience could further increase revenue via digital advertisers targeting a scholarly demographic.
In the “age of digital disruption”, the conventional structure of green and gold open access remains inflexible and requires revolutionary change. Pirate black open access has recognized this need and created a revolutionary, albeit illegal, change in response. It is, therefore, time for academic research publishers to follow suit and shift stakeholders’ views to achieve the necessary changes legally.
To facilitate this process among stakeholders, initiatives such as CHORUS have implemented smooth and streamlined workflows. Similarly, Springer Nature is experimenting with the concept of free article sharing via SharedIt. Furthermore, unbundling the product would allow all content to be freely available, giving publicly funded research an increased readership. The concept of unbundling, starting with the publisher, would offer stakeholders the choice to pay for individual benefits of scholarly communication. In the end, this concept would be more sustainable, less expensive, and far less controversial than the current models.
Presenting your research at a conference or professional meeting symposium is a prestigious accomplishment. In a symposium talk, you present a review of your research and demonstrate how it contributes to the overall symposium topic. Preparing a symposium-based research talk is a major undertaking. However, after presenting symposium talks, many researchers move on to other tasks. All that is left of their effort and contribution is a note on their curriculum vitae that they were a speaker; the scientific content is lost to the memories of the participants and audience. By publishing your symposium-based research talk, however, you create a permanent record of your participation. Through publication you also reach a wider audience than those who were physically present. To build your publication record, it is good practice to commit your spoken words to writing. With a little more effort, you can publish a paper as well as participate in the symposium.
A symposium-based research article is a formal document that summarizes the information presented during a symposium at a conference or professional meeting. It is typically a mini-review of a research topic, especially that of a single author or principal investigator.
Many journals publish symposium-based research articles. There are some publishers who specialize in these types of articles. However, the pathway to publication generally follows two forms: proceedings and independently submitted articles.
The symposium or conference organizers may decide to publish the information presented collectively, in a format called proceedings. Proceedings report the content of symposium talks in a collection of papers, which may take up an entire issue of a journal. It is the responsibility of the organizers to solicit and collect manuscripts from the speakers and deliver them to the publisher. The decision to publish proceedings is generally made before the symposium convenes, and authors should be notified at the time of invitation that they will need to produce a manuscript after the meeting. In these cases, it is best to organize the talk with ultimate publication in mind.
If the organizers do not plan to publish the symposium in formal proceedings, you can still publish your talk. In this case, you (the author) will be responsible for locating a suitable journal for your manuscript. Many, but not all, journals accept these types of papers; some publish mini-reviews, which are a suitable format for symposium-based research articles. If you are invited to participate in a symposium for which the organizers do not plan to publish proceedings, you should begin exploring how and where to publish your talk as you develop it; plan ahead. Essentially, your paper will be a mini-review of a research topic. This is an excellent means of furthering your publication track record and reaching a wider audience. Note that it is considered unethical to submit a manuscript for publication before participating in the symposium.
The specific format will be determined by the journal to which you are submitting your paper. Unlike a normal review, the symposium-based research article is much shorter in length and limited in scope. The length will be determined by the journal with 3,000-6,000 words being typical. Your paper should include tables and figures if appropriate. Generally, the paper will follow a review format and have the following or similar sections:
Symposium-based research articles are based on lengthy, in-depth symposium talks, not on the typical short 10-minute contributed talks given at meetings. The latter normally present a single experiment or project, which is often neither finished nor published.
As a researcher, you should always seek out and accept opportunities to participate in conference symposia. These are excellent ways to reach your audience and further your career. The talks you give are also opportunities which can lead to publications. As you develop your presentation, think about how to get a publication out of your efforts. Begin planning and writing your symposium-based research article as you prepare your talk.
Have you attended a symposium where you presented your research? Please let us know your thoughts in the comments section below.
There is a constant rise in the number of articles published in predatory journals. Young, inexperienced researchers are the main target of a growing group of dubious publishers willing to accept almost any manuscript, regardless of quality or authenticity, for a fee. These supposedly academic companies do not offer services such as peer review or archiving, and have no problem publishing low-quality papers as long as the authors pay. Their websites are usually unstable or poorly designed, and the articles they publish are not indexed by Medline or similar databases.
According to a survey that was carried out at the beginning of the year, researchers working in developing countries (those with insufficient funds, poor research infrastructure, and limited training) are more susceptible to submitting their work to predatory journals. The idea of getting something published quickly can be quite appealing to some researchers, and receiving invitations from journals or having their papers accepted easily can give them a (false) feeling of success.
A recent study published in Nature shows that researchers from wealthy nations also fall prey to predatory publishing. David Moher, an epidemiologist at the Ottawa Hospital Research Institute in Ontario, Canada, and several colleagues spent 12 months analyzing almost 2,000 articles from about 200 suspected predatory journals. They found that more than half of the corresponding authors came from high- and upper-middle-income countries and that many articles had been submitted from institutions in the United States. Interestingly, the US National Institutes of Health (NIH) was frequently named as one of the funding agencies.
The authors point out that “the problem of predatory journals is more urgent than many realize.” In their study, they also assessed the quality of papers published in those journals and found that most experiments could not be reproduced or evaluated properly because of missing information. Additionally, only 40% of the studies carried out on humans and animals mentioned seeking approval from an ethics committee, whereas in regular journals, such approval is reported for more than 90% of animal and 70% of human investigations.
Based on their results, Moher and colleagues estimate that at least 18,000 funded biomedical research studies end up in dubious, obscure, and poorly indexed journals. These publications do not advance science at all as they are usually of low quality and are also difficult to locate.
An evaluation of over 1,900 papers published in potentially predatory journals (based on Beall’s list, which was taken offline at the beginning of this year) showed that the corresponding authors of such publications mainly came from India (27%), the United States (15%), Nigeria (5%), Iran (4%), and Japan (4%). However, to put these numbers in context, it is important to consider the total scientific output per nation (last year, the United States produced about five times more biomedical articles than India and 80 times more than Nigeria).
Kelly Cobey, a publications officer at the Ottawa Hospital Research Institute in Canada (and one of the authors of the Nature study), is in charge of educating researchers and guiding them through journal submission. She also helps them identify and avoid predatory journals. Unfortunately, many research institutions do not have staff members in similar roles, so what else can we do to stop the plague of predatory publishers in academic publishing?
One thing is clear: we must act immediately! To start with, it is important to tell the public what these dubious publishers are doing and warn authors (especially the inexperienced researchers) about the consequences of publishing their work in shady journals. Funding agencies, research institutions, and reputed publishers should work together to issue clear warnings against illegitimate journals and introduce recommendations on publication integrity.
Moher, Cobey, and colleagues also suggest that funders and research institutions should increase the amount of money available for open-access publishing, ensure that researchers are able to identify questionable journals and prohibit the use of funds for submitting papers to predatory journals. They should also monitor where exactly all the grantees and staff members publish their funded work (developing automated tools to achieve this would be immensely valuable).
Manuscripts published in predatory journals should not be considered for granting promotions, appraisals, tenure, or subsequent funding. Moher et al. even suggest that scientists wanting to advance in their careers or looking for research funding should be asked to include a declaration that they have never published in predatory journals (and that they do not intend to do so). Publication lists could then be checked against the Directory of Open Access Journals (DOAJ) or the Journal Citation Reports, the researchers say.
Were you aware of the concept of predatory publishing? What measures need to be adopted to create awareness about predatory publishing? Please share your opinion by commenting in the section below.
Following on from ‘Five Tips for Writing a Good Rebuttal Letter’, we revisit the theme of manuscript resubmission to academic journals. The initial feedback from editors and reviewers about one’s work can trigger a variety of reactions. While authors hope for positive feedback, the more realistic expectation is to address the reviewers’ requests for revision. The way a rebuttal letter is written can determine whether manuscript revision is likely to be successful or a futile attempt at resubmission. Should the editorial outcome be negative with equally critical referees, the recommendation is to provide an appeal letter first. Authors who receive positive feedback, however, can revise in compliance with the comments and submit their revisions along with a rebuttal letter.
A rebuttal letter offers authors an opportunity to address reviewers’ concerns directly, defend aspects of their work, and eliminate contextual misunderstandings. This stepwise breakdown of writing a rebuttal letter aims to assist authors during revision and improve the chances of a successful resubmission.
Acknowledge the reviewers’ time, comments, and expertise. Thanking the reviewers sets a positive tone from the start, providing the basis for an ongoing amicable exchange. Do not insinuate reviewer bias or incompetence; such statements cannot result in a positive re-evaluation of the work.
Acknowledge any misunderstandings on your part, including poor presentation that may have confused the reviewers. Do not imply reviewer incompetence or lack of expertise in the phrasing of your rebuttal. Be clear, avoiding ambiguous or vague statements.
Respond to each reviewer’s individual comments, copying the full text of each comment into your rebuttal letter. Strive to keep answers brief, succinct, and well phrased. Explain how you intend to address the concerns, whether experimentally or editorially. Do not plead for reconsideration on the grounds that a lack of funding prevented you from completing key experiments. Original scientific articles require the full spectrum of research, and an inability to meet reviewer requests experimentally is not a viable excuse.
If the required data are available in a supplementary file that the reviewer may have missed, point this out in your rebuttal for clarity. If you are unable to address a point raised in the reviewer comments, explain your reasons. Do not ignore some reviewer comments while selectively answering others.
Authors often receive feedback on their manuscript from the editor and reviewers in the form of ‘Major’ and ‘Minor’ comments. If the reviewer comments deviate from this typical format, categorize the comments yourself, relative to your work, as major and minor:
The five key points stated above steer authors in the right direction for writing an effective rebuttal letter. However, a few considerations remain to refine and wrap up the final framework.
A quick guide sheds further light on the process of preparing your rebuttal letter in response to reviewers. Researchers can also seek external support to ensure a straightforward review and response process.
Have you drafted a rebuttal letter? What format did you follow? Please share your thoughts in the comments section below.
Researchers frequently spend time finding the right journal after their research work is over. ‘Which journal is best for publishing my academic research?’ is one of the most frequently asked questions. The objective of publishing research is to communicate the findings in the most appropriate journal after peer review. This not only acknowledges your contribution to the research field but also advances your academic career, providing opportunities for research collaborations and grants.
According to the STM Report 2015, more than 34,000 research journals exist, growing at a rate of about 3.5% per year. According to one estimate, more than 1,000 new journals were launched in 2014 alone! With the rise in predatory and hijacked journals, researchers can easily be duped. These numbers can make journal selection a daunting task. Moreover, submitting to the wrong journal increases the likelihood of rejection at the submission stage or during peer review. Therefore, to simplify the selection process, authors need to look at the characteristics and competitive factors associated with a journal, as well as their own preferences regarding publishing time, type of peer review process, and more.
Through this ebook, we intend to provide effective tips and tricks to early-stage researchers and students to help them navigate through their search for the right journal for publishing their research. We have attempted to compile a step-by-step guide and provide relevant information related to journal characteristics, competitive factors, types of journals and articles, open access publishing models, and more.
Here’s what you can expect from ‘How to Find the Right Journal for Publishing’.
Journals and academic institutions have significant roles to play in cases where academic fraud and research misconduct are suspected. When journals suspect academic misconduct from researchers, they should alert the corresponding institutions. Journals should not investigate such cases; institutions should. The NIH defines research misconduct as “fabrication, falsification, or plagiarism in proposing, performing, or reviewing research, or in reporting research results.” While the definition of research misconduct is straightforward, the rate at which it occurs is not as easy to pinpoint.
A study has shown that 1.9% of scientists admitted to falsifying data, while up to 33.7% admitted to using questionable research practices. When asked about colleagues, 14.2% reported falsification, while 72% reported questionable research practices. There are many types of scientific misconduct, and the scientific community becomes less credible with each instance. Academic fraud is committed when researchers purposefully copy others’ work, are dishonest about their work, or fail to attribute sources appropriately. Because it can be hard to agree on how research misconduct should be dealt with, the Committee on Publication Ethics (COPE) has created common guidelines for journals and institutions.
In instances of research misconduct, journals and institutions must work together in identifying the root cause of misconduct. The COPE flowchart suggests ways by which journal editors should handle misconduct. Former COPE Chairperson, Dr. Elizabeth Wager, has proposed the following guidelines based on the COPE flowchart:
Journal editors become aware of possible misconduct through a number of sources: peer reviewers, the authors’ colleagues, and others. There have been several instances in which research misconduct was detected and acted upon by journals. For instance, Patrice Dunoyer, along with a plant biology group headed by Olivier Voinnet, had eight papers retracted from Science, Plant Cell, The EMBO Journal, and several others. Investigations by the French National Centre for Scientific Research (CNRS) and the Swiss Federal Institute of Technology (ETH) uncovered multiple instances of image manipulation in 2015. As a result, EMBO suspended Voinnet and revoked his award, and Dunoyer was temporarily suspended from the CNRS.
To uncover academic fraud, journal editors use a number of strategies. For instance, academic institutions and journals regularly use iThenticate, a tool for detecting plagiarism. The following is a partial list of other strategies used for detecting research misconduct:
A group of medical journal editors founded COPE in 1997 to create a forum for discussing publication misconduct. COPE created a set of guidelines because there was no established protocol for addressing research misconduct. The guidelines developed by COPE focus on how institutions and journals should respond to research misconduct.
The COPE guidelines for research are divided into nine sections: study design and ethical approval, data analysis, authorship, conflicts of interest, peer review, redundant publication, plagiarism, duties of editors, and media relations. COPE has also provided guidelines for dealing with research misconduct. These guidelines are meant to help researchers and editors avoid research misconduct and deal with possible cases of academic fraud.
Although these guidelines exist, journals and institutions should still institute their own policies on research misconduct. However, a study found that only 54.8% of journals have policies in place to deal with fraud. It is essential for all journals and institutions to devise policies on research misconduct. The scientific community loses credibility whenever ethical misconduct occurs, and publishing articles that contain falsified information is detrimental to the advancement of knowledge. In summary, research misconduct should be identified diligently and dealt with strictly.
Should law enforcement arrest researchers for committing fraud and wasting taxpayers’ money? Should they be fined heavily and barred from getting subsequent research funding? Please share your opinion by commenting in the section below.
Fraudulent research has taken a turn for the worse, this time in South Korea. At least 82 academic papers published by South Korean researchers have been identified as fraudulent. Apparently, South Korean researchers have been naming their own children or other relatives as co-authors in order to improve the children’s chances of admission to top universities. The national government recently uncovered this dubious practice. The scandal will undoubtedly have a negative impact on the reputation of several renowned research institutions, including Sungkyunkwan University, Yonsei University, Seoul National University, and Kookmin University. In fact, a total of 29 universities in South Korea are affected.
The problem of child authorship was first discovered a year ago, when it became apparent that a researcher at Seoul National University had included his son’s name as co-author on dozens of published papers. According to the Korea Herald, the child, who had not even completed high school, was listed as an author on as many as 43 of his father’s papers.
We continue to see new ways of taking shortcuts around publication ethics. The rush to make an impact in science often comes at the cost of integrity and quality. Sloppy work and outright fraud have taken many forms: manipulated or fake images, modified data, plagiarism, and other problems that have caused many articles to be retracted. Until now, phony or underage authors were unheard of; that has changed with the recent news from South Korea.
The South Korean scandal is one more indication that the “publish or perish” problem has escalated. It is no longer enough simply to publish one’s work. In today’s academia, career advancement depends more and more on publishing breakthroughs, rather than solid but less exciting replication studies. The increasing pressure to publish new findings is being felt by researchers globally.
On top of the enormous pressure to publish new findings, there is an added problem in South Korea: the country’s faulty higher education system. The national goal is to make university education available to everyone, and as a result, its quality has deteriorated. A university degree alone is no longer enough for young people in South Korea to gain recognition and employment; to succeed, they must attend the very best schools in the country. The competition is fierce, which partly explains why admissions fraud is a problem in South Korea and why some parents feel they are helping their children by including them as co-authors on papers.
Should child authorship be an accepted practice? While it is not that unusual for related individuals to work together and appear as co-authors on a paper, the best research programs and journals have strict rules about disclosing these relationships. It is highly unusual for a person without credentials (without a PhD or similar degree) to be named on a paper. The young people whose names appeared on the South Korean papers were high school age or even younger. At the very least, the research proposal as well as any paper submitted for publication must clearly indicate if there is any researcher involved who is below the age of 18.
The South Korean story is far from over. The journal Nature reports that the investigation is ongoing. Over the next several weeks, the investigators will cross-check the family names of as many as 76,000 South Korean researchers against article records in major databases. The goal of the investigation is to determine how many more published articles might include child authors. The investigation will focus on papers in well-known citation databases: Science Citation Index (SCI), Web of Science, and Scopus.
The scandal comes at a crucial time for South Korea, when it is striving for global recognition as host of the Winter Olympic Games. The good news is that the government has acted quickly. We can only hope that the investigation is thorough and helps prevent this sort of fraud from happening again.
The South Korean academic community and researchers all over the world have the responsibility of monitoring their colleagues and keeping high standards of publication ethics. Why not take the time now to review the basics of ethical research and publishing? Please share your thoughts with us in the comments section below.
Peer review is widely recognized as essential to ensuring the quality of research published in academic journals. There are nearly as many ways to conduct peer review as there are journals. Among these, the traditional blind peer review process is considered the best way to produce open and honest critiques of research. Recently, however, a US court in California issued a ruling forcing a journal publisher to identify the anonymous peer reviewers of a scientific article for the first time.
The article in question, by researchers at Ohio State University, was published in 2013 in The Journal of Strength and Conditioning Research. The study examined physical and physiological changes in around forty volunteers who participated in a ten-week CrossFit Inc. training regimen, and reported that 16% of participants left the program due to injury. The paper, which has since been retracted, was challenged in public and in court. CrossFit Inc. alleged that the publisher, the National Strength and Conditioning Association (NSCA) of Colorado Springs, Colorado, deliberately skewed the results and exaggerated the injuries associated with CrossFit’s training program because CrossFit is its competitor in the fitness business. CrossFit requested that both state and federal judges force the publisher to reveal the names of its editor and reviewers. After several judges refused, a state judge finally granted the order.
This is not the first time a company has requested the identities of peer reviewers be revealed in a court. The New England Journal of Medicine has fought off subpoena requests from companies, usually pharmaceutical manufacturers, many times over the years. In one notable ruling, a judge concluded that “the batch or wholesale disclosure by the NEJM of the peer reviewer comments communicated to authors will be harmful to the NEJM’s ability to fulfill both its journalistic and scholarly missions.” In other words, the confidentiality aspect of peer review is crucial for maintaining the quality of journal articles and scientific scholarship in general. Until now, this has been the universal position of the courts.
Scientists, academics, and publishers alike are worried about the potential impact of this decision. In blind peer review, reviewers and authors do not know one another’s identities. This allows reviewers to give their true opinion of research without fearing the consequences of negative reviews, and ensures that their opinion of an author does not influence their review of a paper. At the same time, it helps the author accept criticism as unbiased and fair. Publishers fear that the ruling could make scientists unwilling to review draft manuscripts. However, Joshua Koltun, a lawyer who reviewed the case, said that it will not necessarily create a legal precedent. He believes that courts will continue to balance the need for reviewer confidentiality against the needs of individual legal cases. A representative of CrossFit stated that the company’s intention is to “incentivize legitimate science, and punish scientific misconduct.”
Perhaps this ruling is a sign that it is time to reconsider the potential downsides of blind peer review. The process of peer review is already under scrutiny within the scientific community. The traditional peer review process is being replaced by newer models like post-publication peer review. Several factors are contributing to this change, but the Internet is probably the largest. Traditional peer review is a time-consuming process, but the rise in online journals and other publishing outlets has increased the demand for quality articles. As time pressures mount, peer reviewers may not be able to provide quality analysis while keeping up with their deadlines. In addition, it may be difficult to hide an author’s identity completely, leading to questions about the true “blindness” of the process.
Following the replication crisis in science and medicine, the Open Science movement has grown in popularity. This movement encourages scientists to share their data, with the ultimate goal being enhanced collaboration and greater transparency. Post-publication peer review allows scientists anywhere in the world to review a published manuscript. As it makes review an ongoing process, it allows academics and scientists to refine their work and identify opportunities for corrections. It also allows for greater engagement of the scientific community with an author’s work and helps break the grip of private publishers, who restrict the availability of their articles to paid subscribers.
No matter what happens in the wake of the CrossFit case, it is clear that peer review is in a process of ongoing re-evaluation and change. Perhaps it will lead to a push for greater openness and transparency and better science in the end.
What do you think of the traditional peer review process used by journal publishers? Are you in favor of blind review, or of other methods like transparent peer review? Please let us know what you think in the comments.
The current American political climate is proving difficult for many of its people. One of the most significant issues is the effect of Trump's administration on science. Scientific funding and the availability of scientific information have become more limited. Each reduction in funding and visibility has the potential to affect science and public health in a different way. The anniversary of Trump's inauguration has highlighted several changes throughout the field of science and research beyond research funding. Let us explore some of them.
The ability of scientific researchers to speak publicly about their work has declined, and gag orders are becoming common. While Trump is not the first president to issue a gag order, the context in which he has issued them is unusual. Multiple government agencies have been instructed to halt communication with the public for a certain amount of time. Although most of these were rescinded, the federal Office of Special Counsel reminded employees that blanket gag orders are illegal.
The prominence of federal scientific advisory committees has also been decreased under the current administration. Over two hundred committees exist to provide guidance to government agencies. Membership to these committees has dropped, partially due to restrictions on who can participate. The Department of the Interior (DOI), Department of Energy (DOE), Food and Drug Administration (FDA), and other agencies have been prevented from advising officials on scientific policy.
Scientific information has been removed from many public government websites under the Trump administration. The Centers for Disease Control (CDC), for example, has been instructed to remove words such as “science-based” from its pages. Without these statements, it can be difficult for the public to understand where health recommendations come from. People are less likely to follow recommendations if they don’t understand them, so the removal of this information could have a detrimental effect on public health.
Politics and science in America, separate in theory, have long been connected in practice. Emphasis on scientific principles fluctuates between presidential administrations. The use of solar panels at the White House over the last few decades has varied depending on whether the president considers them helpful. Government policies concerning scientific matters often change within administrations as well. One of the most recent examples is the shifting attitude toward climate change, or the lack of one.
This lack of long-term governmental consensus has caused uneven application of scientific principles to policy making. Politics often affects what lawmakers consider to be objective scientific evidence, because people naturally search for sources that confirm their own views. More than once, policy makers in government institutions have shown an unwillingness to alter viewpoints rooted in political ideology. This devotion to ideology lies behind some of the changes seen in scientific policy making.
These inconsistencies in policy making have contributed to public distrust of science in America. Scientific findings are often treated as matters of opinion rather than as the product of concerted, unbiased critical inquiry. Changes in political priorities that affect scientific policy making are commonly seen as evidence that science itself lacks objectivity. This attitude is not new, but it has gained prominence under Trump's administration.
Many scientists and scientific organizations are concerned by these policy changes affecting science and research. Individual scientists are speaking out in large numbers. The Union of Concerned Scientists, Scientific American, and the international scientific journal “Nature” have all expressed concern that restriction of scientists will stifle innovation. Some scientists have spoken out in defiance of the gag orders, and many are engaging in the political arena. Multiple scientists are also running for Congress.
Public knowledge of science has declined under the Trump administration, bolstered by the emergence of “fake news.” Without the scientific information government sites previously provided, the American public has become vulnerable to sensational stories they can no longer debunk easily. The scientific information available on government websites is viewed with increasing skepticism. This skepticism has contributed to a number of health crises, including preventable disease outbreaks.
The science and research policies of Trump's administration have had a demonstrably negative effect. It is not possible at present to determine what the long-term effects will be, but analysis of the past and present may provide some direction. Restrictions on scientific information have previously caused disease outbreaks, environmental disasters, and other human health hazards. Hence, these measures are unlikely to yield positive outcomes.
How do the Trump administration’s policies continue to affect science and public health? Is separation of science and politics possible in the modern era? Please share your thoughts with us in the comments section below.
Reproducibility has been a standard of good scientific research for many decades. The reason for striving to design a study that can be repeated and produce the same results is quite simple. Other researchers must be able to recreate the results you have observed if these results are to be considered science.
Despite the recognized importance of reproducibility, it is estimated that only about 40% of recently published science can be reproduced accurately. Reproducing scientific experiments has become more problematic for several reasons:
As reported in a 2016 survey, scientists who had failed to reproduce the published results of other researchers identified several measures that could improve reproducibility. The top three ways to increase reproducibility in science were: a better understanding of statistics, better mentoring and supervision, and more robust experimental design.
Some scientists are questioning whether reproducibility is, in fact, a good indication that the outcome of a project is reliable and has explanatory power. If the hypothesis or experimental design was flawed or biased to begin with, repeating this work does not give us useful outcomes.
Scientists are all taught that “correlation is not causation,” but this warning can be forgotten as their careers progress and the pressure to produce practical outcomes builds. Hence, when reproducibility is called into question, researchers, especially biomedical researchers, are understandably troubled.
Despite the crisis in reproducibility, the same survey found that most scientists still trust the rigor of published studies. Reproducibility is not the only mark of credible research. An approach called triangulation could serve the same purpose of testing the quality of the research.
Triangulation originally referred to the technique of using the location of two known objects to establish the position of a third object, such as a ship at sea. The term has since been adopted in the social sciences to describe a mixture of quantitative and qualitative methods. Triangulation can also be used in the physical sciences. A good description of triangulation is given by Marcus R. Munafò and George Davey Smith in their recent article published in Nature:
“We believe that an essential protection against flawed ideas is triangulation. This is the strategic use of multiple approaches to address one question. Each approach has its own unrelated assumptions, strengths and weaknesses. Results that agree across different methodologies are less likely to be artefacts.”
An example of a triangulated approach in biomedical research is a study by Surana and Kasper. The authors used triangulation to identify therapeutically useful bacteria in the human intestine.
Because triangulated studies often require several different areas of expertise, they could appear quite different in publication than typical papers in a number of ways:
A further innovation, proposed a year ago by Jeffrey Mogil and Malcolm Macleod, is the concept of the “confirmatory study”. This is a form of rigorous triangulation in which the results of a scientific study are subjected to third-party, pre-clinical trials prior to acceptance for publication. Research on new drug therapies could use this approach. The proponents argue that such testing would prevent costly failures in clinical trials: only the best-designed studies would pass the confirmatory study and become published research.
Triangulation is one aspect of the overall global trend towards open research and greater collaboration among scientists. Other aspects of this trend include pre-print servers to share early results before publication and pre-registration of experimental work in advance of the actual results.
Many studies might not survive this kind of scrutiny – and that's the whole point. The goal is to have fewer published papers of better quality. The research that does make it through the triangulation process will be better designed, produce more reliable results, and possibly lead us faster to effective applications. Using different methodologies or having an outside party assess the research is helpful because it forces researchers to return to study design principles and place more emphasis on hypothesis testing.
These new approaches to open science—like triangulation and pre-print servers—do not replace reproducibility. With openness in research comes responsibility and mutual testing, good practices that have always strengthened science.
Triangulation is still a new concept in the scientific community, and its definition and implementation will continue to evolve. Meanwhile, there are a number of ways you can follow the discussion and contribute your own opinion to the debate.
Have you tried triangulation in any of your research? If not, how successful do you think the method would be? Please share your thoughts in the comments section below.
China's progress in the field of science and research has been remarkable. Recent reports released by the US National Science Foundation (NSF) confirm this. According to the reports, China produced the largest number of scientific publications in 2017, surpassing both the United States and the European Union.
In three decades, China has moved from third in the world for producing scientific research articles to first, surpassing both the European Union and the United States. China's growing economy, along with the government's new focus on science, has played a major role in this development. The Chinese government has brought about several changes to drive China's progress in science. Among these changes, payment to scientists for publication deserves special mention; it is an approach not used in the EU or US. This new approach to publication has helped net a 3,000% increase in the number of scientific publications, and the numbers are expected to rise.
Through the last part of the nineteenth century and into the early years of the twentieth, countries like Italy, Great Britain, and Germany topped the lists for the most research and related publications. The United States took the lead in the early twentieth century and remained on top for more than ninety years. China began to gain ground by the mid-1990s. In 2016, China published more than 426,000 studies, which covers 18.6% of the total documented in Elsevier's Scopus database. The United States, by comparison, published nearly 409,000 studies.
For most researchers, payment for publication is rare and in some instances frowned upon. In academia, private research, and industry, most scientists produce research and the related publications as part of their primary contract. Academic and research publications pay nothing for their articles, and the peer-review system frequently means they receive the best articles in the discipline. Without remuneration for publication, the primary reward to the researcher is the publication itself: seeing one's name associated with the work through a byline. For academics on the tenure track, there is further incentive through the tenure and rank advancement process, but nothing that would necessarily stimulate publication well above the minimum set by the institution.
China was the first major industrialized nation to recognize that additional rewards would help in the production of more publications. Implementing a system where researchers are paid to publish their work in appropriate journals, China began to gain on both the EU and the US. In 2017, China passed the United States in the number of science publications having produced at least 15% more articles each year for the last two decades.
Most of the paid publications by the Chinese government are for internal journals. However, there has been a steady increase in the number of outside journals where researchers now publish. In the last two decades publication in Chinese journals has dropped from over 40% to now just over 20% of the total number of papers being published.
In recent years, ethical issues in the academic community have been investigated to help control the quality of publications in China. Since 2015, the Chinese government has been actively investigating fraud and ethics claims, with many of those coming through the China Association for Science and Technology (CAST) in Beijing. Two recent CAST investigations have focused on the Chinese “paper broker” industry. A paper broker secures publication for a researcher, often by unethical means, including the use of fraudulent sources or review processes. At least two dozen articles from leading journals were found to have been tainted during the review process in just the first few months of the investigation. Over the last five years, this number has increased many times over.
Last March, China's Ministry of Science and Technology carried out a survey. The investigation found that more than 100 papers by Chinese authors had been retracted by foreign medical journals, all over alleged peer review fraud. The ministry announced a “zero tolerance” policy toward academic fraud, along with severe penalties for authors found guilty. In fact, a few months earlier, the journal Tumor Biology, published by Springer, had announced the retraction of 107 papers from Chinese authors because it believed the peer review process had been compromised.
Scientific research needs funding. In 2015, the United States spent the most on research and development (R&D), around US$500 billion. China came in second, with roughly $400 billion. However, with the US economy in trouble, its research expenditure has remained flat, while China has proportionally increased its R&D spending in recent years. These trends have produced China's current position in international science. Chinese researchers are unlikely to abandon this approach. With impressive increases in the number of publications both at home and internationally, the Chinese system has raised expectations and outcomes. With the new guidelines and policies being put in place, there will be more focus on preventing unethical or illegal behavior, which in turn should yield positive outcomes for both China and the world.
What do you think researchers from other nations need to learn from China’s progress in the field of science and research? Please let us know your thoughts in the comments section below.
New York: Enago, the leader in editing and publication support services, today announced the worldwide release of Open Access Journal Finder (OAJF), which enables research scholars to find open access journals relevant to their manuscript. OAJF uses a validated journal index provided by the Directory of Open Access Journals (DOAJ) – the most trusted non-predatory open access journal directory. The free journal finder indexes over 10,700 pre-vetted journals and allows researchers to compare their paper with over 2.7 million articles and counting. Following the positive response during the initial pilot stage, OAJF has also been rolled out in languages other than English, primarily Chinese, Japanese, and Korean.
Sharad Mittal, CEO, Enago said, “Open access means critical academic advances and breakthrough scientific theories are accessible globally and instantly.” Commenting on the launch, he added, “Countless predatory journals have slipped into the scholarly landscape, diluting the manuscripts of scholars with misleading findings. OAJF promotes research integrity by enabling accessibility to open access publishing environment that is free from predatory journals.”
Researchers looking for open access journals can now simply add their research abstract, and OAJF will use an advanced search algorithm to deliver contextual search results sorted by relevance. In its search results, the tool displays vital journal details including publisher information, peer review process, confidence index (which indicates similarity between matching keywords in the published articles across all journals indexed by DOAJ), and publication speed. The dynamic platform also lets scholars filter search results by preferences such as peer review process and journal approval, among others.
Sharad expressed his enthusiasm, “We are thrilled, and expect thousands of our scholarly authors to widely benefit from this open access movement.” Researchers can now find relevant research literature from multiple disciplines and explore publication details in one convenient place without any fear for data privacy. Open access journals on OAJF are indexed from publishers in more than 120 countries globally. The publications cover all areas of science, technology, medicine, social sciences, and humanities. Sharad concluded, “OAJF’s mission to increase the ease of accessing open access journals is based on a true vision of creating a useful, fairer and transparent research environment for scholars.”
Since taking office, President Trump has made a number of changes to US federal policies. These changes have impacted the work of scientists within the nation. Trump's first year in office has been characterized by moves that shocked the scientific and international communities, including censorship of scientific terms relating to health, withdrawal from the Paris Climate Change Agreement, cuts to the H1-B visa program, and purges of federal scientific advisory boards. Given this track record, it came as no surprise when the US government's temporary funding measure neared its end, threatening to shut down several services. To the relief of researchers, the government has now extended the temporary funding by a few more days. Let us look at this in detail.
The US government was operating under a temporary funding measure, which expired on 19 January. As a result, the government ran out of money and, by law, was forced to cease many government services and shutter facilities until lawmakers approved a new budget. This happened because Congress failed to negotiate a spending plan for the 2018 budget; the situation could only be resolved if Congress and the White House agreed on a plan to pay for government operations.
However, the US government shutdown that began on 20 January is over. Congress approved legislation on 22 January to fund government operations until 8 February, and US President Donald Trump signed it soon after. This short-term measure will keep the government functioning for less than three weeks, so the scientific community can expect another showdown over government funding in the first week of February.
According to Glenn Ruskin, spokesman for the American Chemical Society, and Matt Hourihan, in charge of budget and policy programmes at the American Association for the Advancement of Science (AAAS), these disruptions interrupt the flow of science. Funding stability and predictability are crucial for the proper functioning of research agencies. If Congress and the White House fail to resolve the issue, researchers fear that their work may have to stop. Many government employees could be affected, as could research under way at government organizations like the National Institutes of Health (NIH) and NASA. The scientific community is waiting for the final showdown.
This is not the first such instance; the US government has had its eye on research funding for a long time. A memo recently leaked to the Washington Post revealed that the US Department of the Interior is adding a new step to the screening process for research grants. The screening applies to outside grantees, including academic institutions such as universities and non-profit organizations. All discretionary grants over $50,000 must now be evaluated by Steve Howke, a previously low-profile staff member, to make sure that they “better align with the administration’s priorities.” A list of ten administrative priorities accompanied the memo. Interior secretaries under both Democratic and Republican presidents have directed federal dollars to support their priorities; however, the creation of a formal approval process appears to be unprecedented within the department.
In addition, the memo came with a warning that failure to comply would have consequences.
“Instances circumventing the secretarial priorities or the review process will cause greater scrutiny and will result in slowing down the approval process for all awards,” the memo stated, in boldface.
Both current and former government officials have criticized the move. David J. Hayes, who served as the Interior's deputy secretary under Bill Clinton and Barack Obama and currently serves as executive director of the New York University School of Law's State Energy and Environmental Impact Center, expressed his concerns about this recent development. According to him, subjugating Congress's priorities to the Secretary's own priorities is arrogant, impractical, and, in some cases, likely illegal. He added that these programs are governed by laws that Congress passes.
Hayes noted that government contracting processes are complex. Applicants expect their proposals to be reviewed with fairness, impartiality, and integrity; political interference would sully the integrity of contracting processes that applicants have a right to expect.
The step echoes major changes made at the Environmental Protection Agency (EPA) earlier in 2017. The agency also instituted a political vetting process for its grants, assigning John Konkus, who officially works in the EPA's public affairs office, to review discretionary grants. Konkus reviews every award the agency gives out, along with every grant solicitation before the award is issued. By far, the majority of scrutiny has fallen on grants related to climate change and environmental initiatives.
While it is unclear exactly what kind of impact the new vetting process will have on science funding, the outlook does not seem bright. The EPA's initiative has so far canceled close to $2 million in funds competitively awarded to universities and nonprofit organizations. Rep. Raúl M. Grijalva of Arizona, the top Democrat on the House Natural Resources Committee, has also denounced this step. To him, the grant approval process looks like a backdoor way to stop funds from going to legitimate scientific and environmental projects. He added that it may harm scientists doing important work simply because officials disagree with their philosophy. This is unacceptable to the scientific community.
Several months before the memo was leaked, the Interior had already canceled a $100,000 grant to the National Academies of Sciences, Engineering, and Medicine that funded a study on the health risks for residents living near coal mining sites. Further cuts to similar initiatives seem likely with the new policy in place.
To secure research funding, your idea needs to be presented properly in your manuscript. Make sure the manuscript is edited thoroughly before submission to the journal.
What do you think of this policy change? Have you or your institute been affected by this policy change? Please let us know your thoughts in the comments section below.
Elsevier has been having disputes with universities in several countries, including Germany, South Korea, and now Finland. Recently, Elsevier and South Korean universities entered into an agreement for access to ScienceDirect, Elsevier's database of journals. The Finnish consortium FinELib, comprising universities, research institutions, and public libraries in Finland, has now followed the South Korean universities. Elsevier and FinELib jointly issued a press release outlining two goals. In their three-year agreement, Elsevier and the Finnish consortium are:
Let us review the agreement between Elsevier and Finnish researchers.
The agreement covers the ScienceDirect Freedom Collection, granting 13 Finnish universities, 11 research institutions, and 11 universities of applied sciences subscription access to around 1,850 journals on Elsevier's ScienceDirect. These include over 1,500 Elsevier-owned hybrid journals and over 100 fully open access journals. Society-owned titles published by Elsevier (e.g., Cell) are not included. Another bonus is Elsevier's citation impact, which is “30 percent above market average.”
According to reports, Finnish research published by Elsevier increased by 37.5 percent between 2011 and 2015, while the total number of Finnish articles published grew by only 15.8 percent during the same period. These numbers demonstrate the value Finnish scientists attach to publishing in Elsevier's high-quality journals.
As for publishing open access, Elsevier’s Your Guide to Publishing Open Access states that open access allows FinELib-associated entities to access “published research, combined with clear guidelines for readers to share and use the content.” Gino Ussi, Executive Vice President of Elsevier, explained that the Finnish research community and Elsevier collaborated to aim at improving the already high standard of Finnish research. They wish to achieve this by paving the way for open access publishing and leveraging the full potential of Elsevier’s ScienceDirect platform. They believe this will improve the way researchers search, discover, read, understand and share scholarly research.
To encourage researchers even more, FinELib states that the Open Access agreement offers researchers a new opportunity to publish their articles. They will also be provided a 50% discount on article processing charges (APC). This discount is available for all corresponding authors in organizations that are parties to the agreement. The discount is offered for articles published in over 1,500 subscription journals and over 100 full open access journals. Researchers may check the FinELib website for more information.
FinELib and Elsevier have enriched the Finnish community by bringing together the resources it needs to succeed. Keijo Hämäläinen, the main FinELib agreement coordinator and Rector at the University of Jyväskylä, explained that Elsevier's high-quality scientific, technical, and medical research publications have significant value for Finnish researchers, helping them stay competitive with the global scientific community. Continued subscription access at competitive rates has therefore been a key priority for the consortium. Hämäläinen also stated that they are very pleased with Elsevier taking concrete steps to support their open access goals. With this new development, the Finnish research community and Elsevier are providing options for authors to publish open access.
The agreement is an important development in Elsevier's disputes with various universities. In case you have missed previous updates on this dispute, you can check the infamous events that made academia headlines in 2017 (part 1).
How beneficial do you think this collaboration would prove for the Finnish researchers? Do you think other universities should form similar collaboration with Elsevier? Please share your thoughts with us in the comments section below.
Elsevier has been in the news for a long time, initially due to its dispute with German universities over access to journals. Recently, a dispute between South Korean universities and Elsevier again made headlines. Disagreements over subscription prices and the inclusion of little-read journals in package deals went on for months. Shortly before January 12, Elsevier agreed to allow a consortium of hundreds of South Korean universities access to the ScienceDirect database through 2020. ScienceDirect boasts content from 3,500 academic journals and thousands of electronic books. The agreement includes price hikes of between 3.5 and 3.9 percent; Elsevier had threatened to block access to ScienceDirect unless the consortium accepted a 4.5 percent increase in access charges.
The Korean consortium claimed that Elsevier was exploiting its market leverage by charging higher subscription rates. It also objected to the minimum flat rate system used by ScienceDirect for its package deal, which was unacceptable because the package included many little-read journals. Lee Chang Won, secretary general of the Korea University and College Library Association, said that the libraries had initially accepted whatever rate increase Elsevier made, but with library budgets being continually squeezed, they can no longer afford its excessive demands.
Korean groups formed the consortium representing 300 university and college libraries in May 2017 to negotiate with 42 database providers. After failing to agree on the terms of usage, the group boycotted Elsevier and other publishers and refused to renew their contracts. Elsevier relented and agreed to continue access while negotiations continued.
The two sides continue to talk about pricing and other details for 2019 contracts. The universities that opt for multiyear contracts expect further concessions. Despite the lowered fees, Hwang In Sung, research analysis team director at the Korean Council for University Education (KCUE), feels that their negotiated (increase) of 3.5 to 3.9 percent with Elsevier is still more than the international level of 2 percent.
“The highest annual bill for libraries is ScienceDirect,” says Lee Chang Won. A recent survey showed that members of the library association spend about $140 million each year on digital database subscriptions. Of that amount, $33 million goes to Elsevier for ScienceDirect, so it is no wonder Elsevier aimed to increase the subscription rates for this database. In a similar dispute with a German academic consortium early last year, Elsevier cut off electronic access to journals at more than 60 institutions that refused to renew their subscriptions. Those negotiations are still ongoing, and meanwhile Elsevier is allowing uninterrupted access while the talks continue. The South Koreans are also negotiating with other providers that have raised rates, and have not renewed those agreements.
The future of academic publishing is uncertain, and only through negotiations between users and companies like Elsevier will questions about journal access be answered. The South Korean consortium has staved off disaster by reaching an agreement with Elsevier for access to ScienceDirect. Singapore-based Elsevier spokesperson Jason Chan confirms that Elsevier reached an agreement with the Korean consortium for access to ScienceDirect through 2020 and has agreed to continue discussing future access options. Beyond that, the future of access to scientific journals around the world is less than clear. These controversies might signal the beginning of the end for the traditional subscription model of academic publishing.
What do you think about Elsevier’s stand in this dispute? Do you think it will have long-term effects? Please share your thoughts with us in the comments section below.
With so much published research now available, it has become harder for your target audience to find your academic research. Promoting your research has therefore become very important.
To help you stand out from the crowd, you should have an ORCID profile. This will allow others to find you even if you change your name or institution. Research identifiers such as a DOI are also useful, as they can be used to track interest in your paper.
What are some other ways that you can get more people to read your research?
This is something every researcher is familiar with. If you want more people to find your work, think about your audience. Who are they? What are they interested in? Where do they work? Are they only in your department or further afield?
Once you know who they are, think about how to reach them. Where do they spend time online? What types of online habits do they have? Use this information to help you decide where to promote your work. Is Twitter with its very open platform going to reach your audience more effectively or are you more likely to find them on LinkedIn?
You could also look at the strategy other researchers in your field are using. Install the Altmetric bookmarklet in your browser. Then navigate to a paper in your field and click on the Altmetric donut. This will give you details about where members of the public are discussing the paper. Use similar channels to promote your work.
Some social media channels require a lot more investment than others. Before you begin, work out how much time you are willing to devote to the channel(s) you choose. Using Twitter means that you will need to tweet several times a day. Some of the tweets should promote and discuss your work. However, most of the tweets should not be about you. Retweet interesting tweets that your followers might like. Reply to other users’ polls and questions. Share research that interests you and tag the authors.
LinkedIn may be a better fit for your schedule if you are very busy. LinkedIn allows you to share updates on your research. It also has a blog-like feature where you can write short articles. This allows you to share your expertise. Of course, if your target audience is not on LinkedIn it would be better to choose a platform where they will see your work.
Be prepared for feedback. There will be honest questions and comments. Some members of the audience may be more interested in starting a fight or advancing their own agenda than what you have written. Remember to engage with each comment respectfully and professionally.
Online communities are great but don’t forget to interact in real life too. Is there a club or society that might be interested in your work? Volunteer to give a talk or a seminar at one of their meetings. Again, audience is key. Make sure your presentation is clear to your audience. How you present data at a conference is different from how you give a talk to people who are interested in science but have no formal training.
It can be really easy to add a link to your research in your email signature. It could be a link to your LinkedIn or Kudos profile. This is an easy way to help the people you communicate with find your work.
Create a plain English summary of each paper. This will make each article less intimidating. It will also help people decide if they want to read the full paper. Post this summary on a blog or discussion group that you belong to. If a science reporter sees it, they may contact you for an interview.
Hashtags are an easy way to find content related to a topic. Twitter is famous for using hashtags, which are basically a topic or phrase with a “#” at the beginning. Depending on the paper, you could use #science, #microbiology, or #astrophysics to categorize your work. Use Hashtagify to identify which hashtags are really popular. Using a popular hashtag makes it more likely that people will find your work.
Many journals have a social media strategy to promote articles they publish. The research office at your institution likely has a PR strategy for promoting research that includes social media, email lists, news outlets, and government departments. Speak with them about promoting your work.
It can be a little scary to promote your research at first. However, it is an essential part of attracting grants, collaborators, and students. Use Altmetric to find out where others in your field are promoting their academic research. Choose a similar platform, being aware of how much time it will take. Use a platform like Kudos to provide a plain English summary of each of your papers. Make sure you use unique research identifiers, and get an ORCID profile to help others keep up with your research output. Above all, participate in the research community.