Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#3997]
POSTED ON BEHALF OF HANS BERGMANS ----------------------------------------------------
Dear Participants of the Open-Ended Online Forum:
Welcome to the second round of discussions of the Open-Ended Online Forum on Risk Assessment and Risk Management!
As mentioned in the email sent out by the Secretariat earlier, this forum discussion will focus on providing input to assist the Executive Secretary with the development of appropriate tools to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment (see Decision BS-VI/12 paragraphs 1(b), 3 and 5(a) and the Annex, Terms of Reference, paragraph 1 (a)).
The objective of this round of discussions is to brainstorm on the conceptual development of tools that can be used to conduct the testing.
Please find below some guiding questions to help focus the discussion:
1. What, in your opinion, comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment?
2. What specific tools for testing would you recommend for each of the different parts of the Guidance?
3. How can these tools be modified to capture specific information on each of the three sections of the Guidance?
4. How can the tools be adapted to accommodate the broad sets of experiences from the various target groups?
5. In what form should this tool be presented to the target groups that will carry out the testing?
Moderator’s Explanatory Comments:
As the moderator of this discussion I would like to add a few comments to further guide the participants through the questions:
Q1: In line with the COP-MOP mandate (Decision BS-VI/12, below), our current discussion is focused specifically on the tools for testing of the Guidance. The tools may be in different forms and modalities and adjusted to different situations.
In this context, please present your concept of such tools in general terms, for example: what kind of materials and methods do you need to be able to test the Guidance on “actual cases of risk assessment”?
Q2 and Q3: These questions may look very similar, but there is a distinct difference in focus.
Question 2 refers to the parts within each section of the guidance documents: broadly speaking, they are each structured to have a “Background”, “Scope” and “The Planning Phase of the Risk Assessment”, followed by a direct hands-on part on “Conducting the risk assessment”, or as referred to in section 3, “Developing a monitoring plan”. Each of these different parts may require different tools, or a different styling of the tools.
Question 3 refers to differences between the material in each of the three sections. The Roadmap in section 1 is the most detailed document dealing with the entire risk assessment process that should be applicable to any LMO. Section 2 deals with specific issues on the risk assessment of specific LMOs and traits. Section 3 deals with the development of monitoring plans. It could be expected that tools proposed for question 2 will apply primarily to the Roadmap, but that specific adjustments are needed for the testing of the guidance in sections 2 and 3.
Q4 and Q5: These questions are straightforward. I would just like to point out that you may also want to take into account the various levels of experience that may exist within the different target groups that you have identified for testing in your own situation.
I am hoping for an equally enthusiastic response as we have encountered in the first round of discussions, and am looking forward to a fruitful exchange of ideas!
Best Regards Hans Bergmans
Quotes from Decision BS-VI/12: Paragraph 1(b): “The Guidance will be tested nationally and regionally for further improvement in actual cases of risk assessment and in the context of the Cartagena Protocol on Biosafety”;
Paragraph 3: Parties, other Governments and relevant organizations are encouraged: “through their risk assessors and other experts who are actively involved in risk assessment, to test the Guidance in actual cases of risk assessment”;
Paragraph 5(a): the Executive Secretary is requested to: “Develop appropriate tools to structure and focus the testing of the Guidance”;
Annex, Terms of Reference, paragraph 1 (a): The open-ended online forum and the Ad Hoc Technical Expert Group on Risk Assessment and Risk Management shall ... provide input ... to assist the Executive Secretary in his task to structure and focus the process of testing the guidance...".
(edited on 2013-01-14 00:00 UTC by Dina Abdelhakim)
posted on 2013-01-13 23:52 UTC by Dina Abdelhakim, SCBD
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#3998]
Thank you Hans for agreeing to chair this important forum on the development of "tools" for the assessment of the RA guidance material.
"Tools" is a much used word in government. We are always seeking "tools" to achieve some end goal. To me a tool is a specific thing that allows the construction/deconstruction of something so that we either have a new thing or we understand better the thing that we have.
In this case we have the "guidance". It is not a thing (or is it?) and it's not even a process. The process is Annex III, and at best the "guidance" is a set of footnotes to the Annex. So do we actually need tools to deconstruct the "guidance"?
The problem is that at MOP6 we talked of testing the "guidance" to see if it worked. However somewhere along the line the idea that you need tools to conduct testing was introduced to the discussion and suddenly we have the proposal that tools will be developed [by the Executive Secretary] to be used to assess the use of the "guidance" in conducting a risk assessment. After 11 or more hours of discussion at MOP6 on the draft RA decision no one was keen to challenge the need for tools and most of us decided that the forum was the place to continue this discussion.
Do you need tools to assess the use of the "guidance" in conducting a risk assessment? I would have thought that the guidance was the tool to assist in conducting a risk assessment and that the test was to question the relevance and usefulness of each part of the guidance as we try to apply it in a risk assessment. I do not believe we need anything else other than a relevant example, and the time and staff to do it.
Regards Geoff Ridley Environmental Protection Authority, New Zealand
posted on 2013-01-14 19:19 UTC by Dr. Geoff Ridley, Environmental Protection Authority
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4013]
Dear Participants, I have been thinking about the questions of the Forum and have discussed them with colleagues in our GMO office. As we understand it, the aim of the Roadmap is to supplement Annex III of the Protocol: «The Roadmap is intended to facilitate and enhance the effective use of Annex III by elaborating on the steps and points to consider in environmental risk assessment and by pointing users to relevant background materials».
Therefore, we agree with Geoff Ridley's opinion and believe that, combined with the case-by-case approach to each type of LMO set out in Annex III, the Roadmap is itself a tool to assist in conducting a risk assessment. It is, of course, a generic set of actions, but these will always be complemented as knowledge about LMOs and their environmental effects develops. We believe that national biosafety policies and legislation are the main instruments governing the use of the Roadmap.
We think that the more important task in this context is the development of appropriate tools to analyse the risks of specific types of LMOs and traits and to support the monitoring procedure (the Guidance, Parts II and III). In our opinion, this can be achieved only by continuously updating the database of background materials linked to the Guidance with appropriate analysis techniques, by assessing the degree of applicability of those tools, and by having competent experts select and sort the methods of analysis (individual tests) in the database, for convenience when looking for experts in risk assessment or when planning research and conducting risk assessments.
Galina Mozgova, The National Coordination Biosafety Centre of the Republic of Belarus
posted on 2013-01-16 08:34 UTC by Ms. Galina Mozgova, Belarus
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4014]
I would also like to thank Hans Bergmans for chairing this very important forum. Greetings to all forum participants.
I have a couple of questions on some of Hans’ topics that I understand as being key to the development of this testing task.
Q1) I suggest two different tools: (A) Workshops and meetings; and (B) Online forum.
Q2 and Q3) We have previously discussed the development of a package that aligns the guidance document with the training manual. Since it is a goal not only to align them but also to present them together, I don’t see a reason why we have to split the guidance document into different parts using different tools to test it.
Hans mentioned the use of a real case scenario for the testing. I suggest that focusing on only one dossier is more effective than developing different ‘cases’ for the testing and comparison of the results. This would help us to compile the results and interpret them.
Parts I and III will be useful to the RA of any LMO. Part II is more specific and is divided into 4 types of LMO (stacked genes, abiotic stress, trees, mosquitoes). If we choose any of these specific LMOs for the real-case scenario we will be testing Part II.
Q4) We have mostly agreed in the previous forum that the package is aimed at anybody involved in environmental risk assessment of LMOs under the national biosafety framework. Again, I don’t see a reason why we need to adapt tools to meet the various levels of experience of testers. The guidance document and the training material already do that for us. The key question here is how to effectively include all target people!!
Q5) It depends on the discussions of all the previous questions above.
Regards
Sarah Agapito Federal University of Santa Catarina - Brazil
posted on 2013-01-16 13:10 UTC by Dr. Sarah Agapito-Tenfen, NORCE Norwegian Research Centre
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4000]
I have been thinking about these questions and discussing them with my colleagues in our GMO Office. I would like to present to the Forum an overview of our comments, for further discussion.
Q 1. What, in your opinion, comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment?
An ‘actual case of risk assessment’ requires an actual application as (one of) the materials, and the performance of a risk assessment as the method. However, under Q 2 we will see that more materials are needed for a full testing of the Guidance.
We first focused on doing a risk assessment in accordance with the so-called Roadmap in Part I of the Guidance. The most straightforward testing would involve an actual application for the large-scale release or placing on the market of a genetically modified crop, because that is what we have most experience with, and what the Roadmap focuses on. The SCBD could propose an actual application that is probably meaningful to all testers, e.g., a Bt maize line. However, testers may use another example if they wish. In that case, the actual application used must be publicly available.
Focusing on the section ‘Conducting the Risk Assessment’, the person (or group of persons) performing the test, let’s call him/them ‘the tester’, should in the first place follow the prescriptions in the Guidance. The test questions would be: do I understand the logic described in the steps of the risk assessment, can I locate the necessary information in the application, and can I answer the questions and draw a conclusion as required by the Guidance? The answers in the test report should not be the result of the risk assessment, but should tell us about any problems encountered in the process described. An interesting question would be whether the tester thinks that the method is compatible with the way risk assessment is normally performed under his legislative system.
Conclusion:
- Materials: an actual application for large-scale release or placing on the market of a GM crop.
- Method: as described in the section ‘Conducting the Risk Assessment’ in the Roadmap (Part I of the Guidance).
Q 2. What specific tools for testing would you recommend for each of the different parts of the Guidance?
Under Q 1 the tools for a specific case of performing the risk assessment of a well-known GM crop were presented. This is the third part of the risk assessment as described in the Roadmap. But in our opinion the Roadmap also gives guidance on over-arching issues in the risk assessment process, and on the planning phase of the risk assessment. It will be much less easy to provide materials for testing these parts of the Guidance. At least, in our daily practice we would not find direct information on these issues in an application. The materials would be the local requirements laid down in local legislation documents and practices. The method for testing would probably be to check the availability of legislation documents or regulatory practice. The test questions would be: does the tester get the required information in the procedures that are followed under his legislation, and does the tester understand the rationale and the logic in these parts of the Guidance? Basically, the test question would be whether the methodology described in the Guidance is compatible with what is required under the tester’s legislation.
Conclusion:
- Materials: cannot be provided centrally by the SCBD; they would be in the legislation of each tester. The tester must indicate which materials have been used, and they must be publicly available.
- Method: as described in the sections ‘Overarching Issues’ and ‘Planning Phase of the Risk Assessment’ in the Roadmap (Part I of the Guidance).
Q 3. How can these tools be modified to capture specific information on each of the three sections of the Guidance?
This question speaks of ‘each of the three sections of the Guidance’; this should actually be ‘each of the three Parts of the Guidance’ to be consistent with the text. For Part II, specific types of LMOs and traits, the materials and methods would be the same as described under Qs 1 and 2. As there is generally less experience with these issues, it is important that the SCBD provides clear applications/dossiers for these cases. Part III is primarily about the method for establishing monitoring plans and practices. We think that the test question in this case is similar to that in Q 2: does the legislation that the tester is working with ask for the same approach as described in this Part, and does the approach described in the Guidance fit in the regulatory system?
Q 4. How can the tools be adapted to accommodate the broad sets of experiences from the various target groups?
We don’t think that the materials should be adapted to the level of experience of the tester. All testers (‘experienced’ or not) should be made aware, and confident, that they actually help the process if they indicate the areas in the Guidance that are not clear and where they need more explanation or examples. In fact, we think that will probably be the most valuable outcome of the testing process.
Q 5. In what form should this tool be presented to the target groups that will carry out the testing?
The materials should include the ‘actual cases of risk assessment’, so: actual applications. It will be more difficult to provide texts for the cases where the tester needs to refer to his own legislation. The Party that asks the tester to do the testing should in fact provide this information. Alternatively, or in addition, the SCBD might provide examples, but it must be clear to the tester that these are just examples, to help find the information in the tester’s own legislation.
Boet Glandorf Risk assessor at the GMO Office Natl. Institute of Public Health and the Environment, the Netherlands
posted on 2013-01-15 11:11 UTC by Ms. Boet Glandorf, Netherlands
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4019]
Dear Hans, Dear All,
I would like to make a short comment on this issue.
One should not forget that one of the main objectives of the Guidance (and in my opinion the primary objective) is to provide a structured framework facilitating access to other documents useful for the risk assessment of LMOs (accessible through the Biosafety Information Resource Centre and the Scientific Bibliographic Database on Biosafety). In that sense, the Guidance is not a self-sufficient document to support a risk assessment. There are plenty of guidance documents, scientific papers, risk assessment reports, books... where more detailed information on generic or specific aspects of the risk assessment (including on how to conduct a risk assessment) can be found.
Therefore, I think that the testing of the Guidance should primarily answer the question whether the Guidance adequately points to the most important and relevant sources of information. The testing should include the identification of missing sources of information in the Biosafety Information Resource Centre and the Scientific Bibliographic Database on Biosafety.
Best regards,
Didier Breyer, Ph.D., Scientific Officer Scientific Institute of Public Health, Biosafety and Biotechnology Unit Brussels, Belgium
posted on 2013-01-17 08:52 UTC by Didier Breyer
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4010]
POSTED ON BEHALF OF HIROSHI YOSHIKURA --------------------------------------------------------
What tools can be used to conduct testing in actual cases of risk assessment? How can these tools be structured and presented in order to gather the most pertinent information?
I understand that what MOP-6 agreed on was to check whether the currently proposed risk assessment/management guidelines are workable in practice. Therefore, the first step would be to ask member countries whether or not they have experience of risk assessment. If the answer is yes, the next questions will be:
* how they have conducted risk assessment;
* whether they have used the currently proposed guidelines, or
* whether they intend to use the guidelines; and, if the answer is in the positive,
* how they have used, or intend to use, the present guidelines.
The survey could contain questions regarding:
* benefits of using the guidelines, as well as problems identified based on past experience (by showing examples).
1. It is important to survey, in addition, how risk assessment is being done in the national regulatory framework:
* outline of risk assessment conducted by developers;
* outline of risk assessment conducted by regulators;
* public participation in the process (risk communication);
* the overall framework of risk analysis, such as the organizations involved in risk assessment, those involved in risk management, risk communication arrangements, etc.; and
* manpower, facilities and cost borne by responsible bodies.
2. The above information, if available, may give a more concrete picture of ongoing risk assessment and management.
3. However, I note that risk assessment and risk management are the national governments’ responsibilities. I myself doubt whether any formal assessment of the utility of the proposed guidelines is worth doing. The use of guidelines should be in the hands of clients. If they are used in practice, they are good guidelines.
Hiroshi Yoshikura Adviser, Food Safety Division, Ministry of Health Labour and Welfare 1-2-2 Kasumigaseki Chiyoda-ku, Tokyo 100-8916 FAX: +81-3-3503-7965 Tel: +81-3-3595-2142 / +81-3-5253-1111 (2409) E-mail: yoshikura-hiroshi@mhlw.go.jp
posted on 2013-01-15 20:31 UTC by Dina Abdelhakim, SCBD
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4015]
Dear participants of the Open-ended Online Forum,
This is to draw your attention to the fact that the Open-ended Online Forum on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms did indeed start last Monday, 14 January, as announced earlier. During the past two days you have not received the by now well-known notification via your e-mail address from bch@cbd.int that new postings were available. This was due to a technical issue that has now been fixed.
May I draw your attention to the first posting in the Forum, in which 5 leading questions were put forward relating to the appropriate tools that the Secretariat will have to develop to structure and focus the testing of the Guidance. The Open-ended Forum has been asked by COP-MOP to help the Secretariat with this task. In the first posting I have also given some comments on, and explanations of, the 5 questions from my point of view as moderator of this discussion.
I have noticed that the discussion is already getting well underway. I intend to provide a halfway overview of the discussion by next weekend.
Thanking you very much for your efforts to make this discussion a success, Hans Bergmans Moderator of this discussion in the Open-ended Online Forum
posted on 2013-01-16 15:07 UTC by Mr. Hans Bergmans, PRRI
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4016]
Question one asks what tools do we need? Let’s look at a specific example.
For instance with respect to scientific information the Protocol is silent other than in Article 15 (1) “Risk assessment undertaken pursuant to this Protocol shall be carried out in a scientifically sound manner…” Here there is an assumption that any information used should be scientifically sound.
Part I of the guidance says “Information, including raw data, of acceptable scientific quality should be used in the risk assessment. Data quality should be consistent with the accepted practices of scientific evidence-gathering and reporting and may include independent review of the methods and designs of studies.”
So how do I test the usefulness and utility of this statement? By itself, I cannot. Part I says that the “Criteria for the quality of scientific information” is the use of “information … of acceptable scientific quality”. So what does this look like? Well, it should be “consistent with acceptable practices of scientific evidence gathering”. So how will I know that it is? “Appropriate statistical methods should be used where appropriate”. If I am in a country that has little experience of risk assessment, how do I know what is appropriate?
In this instance I did not require a tool to tell me that the Guidance has not provided me with any guidance; rather, it has increased my uncertainty as to what I am to do. At best, all I can do as a result of reading the Guidance is gather up the ‘scientific information’ and look for someone who might know whether it is quality scientific information. Given the recent publication of scientific information in high-quality journals followed by its retraction, I will have to take any advice I get from experts with some reservation.
As I said in my first posting “I would have thought that the guidance was the tool to assist in conducting a risk assessment and that the test was to question the relevance and usefulness of each part of the guidance as we try to apply it in a risk assessment.”
As I have shown, the guidance on the criteria for the quality of scientific information is of little relevance and usefulness to a practitioner. I didn’t need a tool to show me this.
Geoff Ridley Environmental Protection Authority, New Zealand
(edited on 2013-01-17 00:35 UTC by Geoff Ridley)
posted on 2013-01-17 00:34 UTC by Dr. Geoff Ridley, Environmental Protection Authority
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4017]
Dear Hans, Dear Members of the Forum My greetings and best wishes for the new year.
I must admit to being confused by the post above. I think the sentences quoted are some of the most important in the Guidance. They simply remind whoever is doing the assessment that it is based on science. In the traditions of science, findings should be presented in a way that allows them to be confirmed by independent practitioners. That means that materials should be made available as necessary and methodologies clearly described.
As with so many international guidance documents, the assessor is left to make normative decisions about the science that they accept. This is also one of those times. If a regulator senses that s/he is unsure about the quality of the data, it seems reasonable and good to seek further advice. If that expertise is not readily available, or is not trusted, then the regulator might ignore his/her uncertainty or may choose to act on it, withholding approval until s/he can be satisfied. The alternative to seeking independent advice when the assessor is unable to make a determination for themselves is to conclude by default that the provider of the data was trustworthy. To me that is no less problematic than independent experts about whom you may have reservations.
So if we are to evaluate the guidance sentence by sentence, I would say that this one is good guidance.
I’m not sure what the point about retraction was either. Retraction from the scientific literature is not all that rare. Science is a human activity and humans are flawed. However, only published science can be retracted! Unpublished studies used by regulators are not accessible to independent review and thus are even less likely to have any flaws exposed by practicing scientists confirming the data. The use of blind peer review and an expectation that materials will be shared is not a perfect system, but neither is a system relying on data and materials that are kept secret.
There are different levels of Guidance. This document does not prescribe specific statistical tests. Perhaps the next AHTEG will decide to evaluate the different statistical tests that are in use, and offer advice on the minimum number of samples to be used, how long tests should go, how many times they should be independently replicated and so on, and may even attempt to validate particular statistical tests. While that might be a next step in the Guidance, the current statement on what constitutes quality can be seen as a legitimate first step in the Guidance.
The MOP decision was "The Guidance will be tested nationally and regionally for further improvement in actual cases of risk assessment..." That for me is one of the answers. It is to be tested when being used in a case-specific manner and the tools will be those that monitor the activity in a way that further improves the Guidance. Are there any regions who would be prepared to undertake this activity?
With best wishes to all Jack
posted on 2013-01-17 06:10 UTC by Mr. Jack Heinemann, University of Canterbury
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4018]
Dear Hans, Dear All,
Thank you, Hans, for formulating the questions that can help us to identify the appropriate tools for testing the Guidance. Thanks also to colleagues for their comments and interesting ideas.
I consider that a regional and subregional approach to testing would be very useful. Thank you, Jack, for focusing on this point. As I already mentioned in the previous online forum, we discussed this with our colleagues in the CEE/NIS countries and have agreed to initiate the RA testing as a subregional joint activity. It would also be useful and interesting given the similarities in development, agricultural conditions, communication (language) comfort, etc. I would express the interest of Moldova in hosting a regional/subregional meeting that will bring together risk evaluators, researchers, decision makers, farmers, NGOs and others. I would mention Belarus, Moldova, Armenia, Georgia and Tajikistan, which have already expressed their interest in collaboration. We are also open to inviting other countries in the region who are willing to join us.
A face-to-face meeting to discuss the Guidance paragraph by paragraph, to assess its relevance and usefulness based on a real RA dossier, would be efficient. Additionally, laboratory demonstrations of applications related to risks in the contained use of LMOs, detection, etc. would also be useful. A field trip to agricultural/natural areas and communities would help in understanding the usefulness of the Guidance regarding wild relatives, co-existence, socio-economic issues of communities, etc.
This approach will help to consolidate efforts and will contribute to a better understanding of the RA procedure, bringing together the relevant people of the countries of the region in providing RA for LMOs, based on their scientific knowledge and experience.
Is this relevant also for other regions?
My best wishes in the New Year!
Angela Lozan
posted on 2013-01-17 08:26 UTC by Angela Lozan
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4020]
Greeting to All for the New Year!
I appreciate Jack's suggestion to evaluate the different statistical tests and to define the test protocol in detail, such as sampling number, duration, etc. That is very important for testing and for comparing the test results.
This reminds me of the importance of setting standards, and I recall my cigar number theory again... This is a brief echo and I really appreciate it.
Wei
posted on 2013-01-17 13:05 UTC by Mr. Wei Wei, China
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4021]
Let me begin by wishing all a Happy New Year and thanking Hans for his work in preparing the guiding question and providing helpful explanations. His efforts are most appreciated.
Having served as a member of the former AHTEG, where this issue of “testing” has been discussed for years, I still find this a challenging concept, largely because of the highly divergent views of what the guidance should be. The view that I continue to maintain is that guidance should help the user understand how a task in the future might be approached based on the experience of the past. Annex III provides the fundamental objective of, application for, principles of, methodology of, and some points to consider for a risk assessment to be considered compliant with the Protocol. I believe it was never the intent to tell Parties HOW to conduct a risk assessment. Both the CBD and the Protocol are pretty clear that decision-making, including what constitutes an appropriate risk assessment, rests within the scope of national sovereignty, where interpretations of precaution, “adequate level of protection” and meeting other international obligations are made. As such, I feel the true utility of the guidance is directly related to how well it helps users understand information in existing guidance produced by functioning regulatory authorities (those to whom dossiers have been submitted and by whom decisions have been made based on a risk assessment). Therefore, testing should first compare the current guidance (the roadmap) with existing guidance and regulations, focusing on coherence and correspondence with a functioning regulatory system’s operations. Second, the roadmap should be compared with Annex III of the Protocol to ensure coherence (compliance without extending beyond what is in the Annex).
In the previous online forum and in some recent interventions, some have suggested that the guidance should have elements of novelty and present innovative ideas. I strongly reject this notion especially since the AHTEG never reached consensus on most of these. To build on Geoff’s recent intervention (#4016), it is important to test the guidance for proposals like those he has identified. If these ideas have no basis in existing guidance and if there is no consensus by the AHTEG that they should be retained, they should be removed.
By giving the guidance a thorough review using this approach we still have the opportunity to fulfill the requests of Parties in Decision BS III/11.A to share information and “collaborate, as appropriate, with regard to biosafety” by placing the guidance in the context of how functioning regulatory systems operate in actual cases of risk assessment.
With regard to “tools”, I feel that a questionnaire/survey approach is likely the best way to gather information. As such, I would support the Secretariat drafting a survey based on the input from the online forum, where the emphasis is on comparing the roadmap with existing guidance/regulations and (separately) Annex III. The draft survey provided by the Secretariat would then be reviewed and revised by the new AHTEG. In this manner, the AHTEG will take responsibility for the testing and guide the process such that there is a greater likelihood of consensus within the AHTEG on the analysis of the data.
Some have suggested using actual dossiers to test the guidance. I cannot support this approach at this time until I can see a clear path to obtaining information that will be helpful in improving the guidance. My concern is that this approach will likely lead to questions like “why wasn’t something described in the roadmap explicitly noted in the dossier?” Such questions will only highlight the inflexibility of the current roadmap.
With regard to Hans’ guiding questions, I submit the following.
1. As noted above, the basic “tool” will be a questionnaire. In addition, existing guidance (that has been used in actual cases of risk assessment for decision-making) from OECD and regulatory authorities would be an appropriate tool for testing coherence between the AHTEG guidance (the roadmap in particular) and the information utilized by risk assessors to conduct a risk assessment in actual cases. I further suggest that the Protocol itself be used to assess the coherence and fidelity of the guidance with Article 15 and Annex III.
2. I see Q2 as a second tier of Q1. The test is to be able to point to existing guidance as a precedent for elements in the roadmap. A guiding question in the questionnaire could be: what existing guidance document supports the statements in the Background of the roadmap?
3. The questionnaire is the basic tool to be prepared by the Secretariat. However, it is up to the AHTEG to modify this tool so that interpretable information can be collected; it cannot be based on just yes or no answers. This is why the process needs to allow for some iteration in developing the questionnaire.
4. The questionnaire should collect certain data such as years of experience and the number of actual cases of risk assessment performed (or parts thereof).
5. Again, the initiating point is a questionnaire proposed by the Secretariat and reviewed and finalized by the AHTEG.
Thanks and best wishes to all for a healthy and happy 2013.
(edited on 2013-01-17 15:37 UTC by Thomas Nickson)
posted on 2013-01-17 15:35 UTC by Mr. Thomas Nickson, Consultant
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4023]
Dear Hans, Thanks for taking on your shoulders the role of moderator, and thanks to the Secretariat for fixing the problem with the BCH messages. Over the years I have come to rely blindly on those BCH messages to be reminded when a debate has started. Reading the various interesting interventions, I get the impression that people have in the back of their heads clear but very different ideas of what we should be testing for. Some interventions refer to testing for ‘appropriateness’ (e.g. consistency with Annex III), while other interventions talk about testing for ‘usefulness’ (e.g. for novice risk assessors). Both objectives seem valid to me, and I note that the MOP instructions talk simply about ‘testing’, without clarifying testing for what. Given that different objectives for testing may require different tools, I propose that we first clarify what we are testing for, before we start talking about specifics. Building on the two objectives for testing that have been mentioned, I offer the following distinctions:
1. Testing for appropriateness:
a. Consistency with Annex III
b. Consistency with the experience and knowledge accumulated over the last decades
2. Testing for usefulness:
a. For novice risk assessors – can it help them digest a dossier?
b. For experienced risk assessors – does it help harmonisation?
Turning to tools: the charge of the MOP is to test “in actual cases of risk assessment”. “Actual cases” can mean a variety of things, ranging from ‘hypothetical realistic cases’ to ‘existing dossiers’. Here I believe that we should leave the testers freedom to choose, depending on their situation. While testers should have some freedom to decide what kind of cases they use for testing, I think that we need clear guidance about the reporting, so that the results can be compared. Looking forward to the rest of this debate.
Piet van der Meer
posted on 2013-01-17 21:43 UTC by Mr. Piet van der Meer, Ghent University, Belgium
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4024]
Didier Breyer made an excellent point “the testing of the Guidance should primarily answer the question whether the Guidance adequately points to the most important and relevant sources of information. The testing should include the identification of missing sources of information in the Biosafety Information Resource Centre and the Scientific Bibliographic Database on Biosafety.”
The guidance is guidance on what could be considered in conducting a risk assessment, and it provides links to other documents that might be of use to a practitioner. As it is only guidance, many parties were reluctant to endorse it at MOP6 because of the concern that such an endorsement would lift it from being “only” for guidance to some mandatory state. As guidance, it is questionable whether formal testing is needed, but Parties who use it should be encouraged to make public how useful it was – almost like a “book review”.
After my last posting on “what is a tool”, I gave it some more thought and suggest that a tool might be a simple questionnaire that breaks the guidance up into discrete sections and provides a uniform assessment scale, e.g. 1–5, where 1 is not useful at all and 5 is very useful, with comments as to why the score was chosen for each assessed section – this would be a standardised “book review”. The information and opinions collected in these standardised “book reviews” could then be pooled and analysed to amend the guidance as appropriate.
In my previous posting I noted that what could be seen as good information one day could be considered not good the next day (this was labelled as “retraction” by Jack). My point was that the guidance says the information should be good and scientifically based; however, for the novice practitioner there is no way to know this when even peer-reviewed journals get it wrong. The guidance document therefore provides guidance without guidance, which is effectively useless to the novice practitioner.
I also am surprised by Jack’s next comment that “Perhaps the next AHTEG will decide to evaluate the different statistical tests that are in use and offer advice on the minimum number of samples to be used, how long tests should go etc” Such direction ceases to be guidance and is well beyond the mandate of the MOP6 decision.
Geoff Ridley Environmental Protection Authority, New Zealand
posted on 2013-01-17 22:43 UTC by Dr. Geoff Ridley, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4025]
Greetings again to all in the forum. Hans has asked 5 questions, which I will reply to in order.
1. What, in your opinion, comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment?
One tool has been suggested by Angela [#4018]: to host a regional workshop using the Guidance on an actual case of risk assessment. This seems to be closest to the wording of the decision from the last MOP and will likely produce quality information for improving the Guidance. Her idea to include groups beyond regulators (also mentioned in [#4014]) is in my view excellent, and is consistent with Article 23 of the Protocol. While a survey approach has value, recall that the Guidance has already once been tested by survey. A repeat of this kind of testing allows for greater coverage of countries and regions and thus should be seriously considered, but not to the exclusion of a regional workshop such as Angela suggests. There may already be countries that have been working through the Guidance doing official risk assessments, and another tool would be to conduct a meta-analysis of their experiences that would serve to improve the existing Guidance. To the suggestion that the Parties had in mind that the Guidance should simply conform to the existing practice of some select countries [#4021], many of which are probably not Parties, I disagree. The underlying rationale for the Guidance was to help Parties use Annex III of the Protocol, not to help Parties understand information in various unspecified ad hoc guidance documents developed perhaps by regulators who have different international obligations. If the Parties had in mind developing Guidance that was subserviently derivative of other guidance documents, or subservient to some existing regulatory systems, then they could simply have adopted the other guidance documents.
However, none of these other guidance documents have been consensus documents written exclusively with the Protocol in mind, as the Guidance under discussion was. Thus it is no surprise that the AHTEG-produced Guidance might differ from ‘consensus’ documents produced with different consensus goals in mind and written by people with different international obligations. Moreover, there is no international consensus that I am aware of that any particular ‘functioning regulatory’ system is the ideal model for all countries. Having said this, I do not mean to imply that references, including other guidance, should be excluded from the testing. The suggestion that the testing include how well the Guidance and the underlying Scientific Database [#4019] work together would be a valuable additional component. Who knows, it may also serve as an informal ‘test’ of other (untested) guidance documents!
2. What specific tools for testing would you recommend for each of the different parts of the Guidance?
Angela’s suggestion would work for all parts of the Guidance, provided that the example of an actual case included something of relevance to Part II. If this is not convenient or possible, then Part II could be tested further by a survey approach.
The information that we wish to capture is that which improves the Guidance, not the outcome of the risk assessment per se. If different countries/regions differ on what they find, any suggestion that some are wrong or some are right would be inappropriate meddling in their internal affairs. Moreover, it is possible for countries to differ because of case-specific differences (e.g., different receiving environments). So we should be careful to adapt the tool to collecting the information requested by the Parties and not information, such as uniformity in decision making, that was not.
3. How can these tools be modified to capture specific information on each of the three sections of the Guidance?
This is a difficult question to answer in the hypothetical. Once we have a tool identified, then specific proposals could be made. However, assuming for the moment that Angela’s offer is accepted, I would suggest that a periodic online forum be used by the participants of that workshop to liaise more broadly with members of this Forum or the next AHTEG, as a way to report on progress and perhaps to discuss issues as they arise.
4. How can the tools be adapted to accommodate the broad sets of experiences from the various target groups?
By concentrating on selected countries/regions, the tools will likely be adapted by providing case-specific context in the form of their national laws [#4013] and a description of the likely intended receiving environment.
5. In what form should this tool be presented to the target groups that will carry out the testing?
Another hypothetical, since we haven’t yet identified the tool. I may have a concrete proposal later, when it looks like a likely tool has been identified.
With my best wishes Jack
posted on 2013-01-18 00:18 UTC by Mr. Jack Heinemann, University of Canterbury
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4026]
Dear participants,
I would like to thank Hans Bergmans for moderating this discussion.
It seems there are different views on the overall picture of the Guidance. Before discussing a tool to test the Guidance, I think it would be better to review the past development process of the Guidance, in order to recognize to what extent we have reached a common understanding of what the Guidance should be.
Needless to say, in order to promote the safe use of LMOs for human well-being, it is quite important that risk assessment is carried out widely in member countries in a scientifically sound manner. Therefore, I think a practical guidance is desirable, especially for Parties with little experience. For guidance to be practical, generally speaking, simpler is better. Of course, it should be consistent with Annex III of the Protocol. On the other hand, it should be flexible and should not be prescriptive, so that it can be utilized under the different regulatory and administrative conditions of the member countries.
As the Guidance has been discussed for a long time, participants, including me, might not know the entire process of discussion. It would be helpful to summarize the process and to know the reasons why the Guidance has been developed in its current style (the amount of description, format, contents, etc.).
In reviewing, it should be noted that the former AHTEG reported that “The group generally felt that, at this time, further generic guidance that is applicable to all assessments of risk as outlined in Annex III of the Protocol (e.g., all types of organisms, traits, and all types of hazards), is not a priority”. (p3, UNEP/CBD/BS/COP-MOP/3/INF/1, “REPORT OF THE AD HOC TECHNICAL EXPERT GROUP ON RISK ASSESSMENT”, 6 December 2005) Also, the annex of BS-IV/11 describes one of the terms of reference of the AHTEG, namely to “Develop a "roadmap", such as a flowchart, on the necessary steps to conduct a risk assessment in accordance with Annex III to the Protocol and, for each of these steps, provide examples of relevant guidance documents”. According to the above-mentioned report and decision, the desired guidance seems not to be a “generic guidance”, but a simple “roadmap”, such as a flowchart with examples. I cannot understand why the style of the Guidance, especially the “Roadmap”, became what it is today.
The following are my responses to the guiding questions:
1. What, in your opinion, comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment?
The previous scientific review and testing used a questionnaire to member countries etc. A questionnaire can be a candidate tool. A workshop may be another option. However, considering cost, it would be limited to countries that can afford to hold meetings. In addition, generally speaking, as a workshop would be more time-consuming, it may be difficult to hold workshops in various regions during a certain period with the same concept.
2.What specific tools for testing would you recommend for each of the different parts of the Guidance? 3.How can these tools be modified to capture specific information on each of the three sections of the Guidance?
Questions in a questionnaire can be modified to fit the different parts of the Guidance.
4.How can the tools be adapted to accommodate the broad sets of experiences from the various target groups? 5.In what form should this tool be presented to the target groups that will carry out the testing?
Questions in a questionnaire can be modified to fit the various target groups.
Although question 1 refers only to testing the Guidance in actual cases of risk assessment, I understand the testing would also be conducted in the context of the Cartagena Protocol on Biosafety, as stated in BS-VI/12 2(b).
When we consider the parts of the Guidance, it should be recalled that the newly developed guidance documents, namely the special guidance on “Risk assessment of LM trees” and “Monitoring of LMOs released into the environment”, have not yet gone through the scientific review and testing process. In particular, as for monitoring, “the issue of “general monitoring” was the focus of intensive debate” (COP-MOP6/INF/11, para 54), and it seems the debate has not been completed.
It may be useful to review the discussion process during the intersessional period of 2010–2012. It seems a substantial amount of the useful opinions expressed in the process were not fully reflected or taken into account in developing the current Guidance. Such a review would make the future testing activity more effective and constructive.
Best regards,
Isao TOJO MAFF, Japan
posted on 2013-01-18 09:18 UTC by ISAO TOJO, Ministry of Agriculture, Forestry and Fisheries
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4027]
I agree that the test should be conducted in a scientifically sound manner. First, all of us, especially the AHTEG as a group of experts, should take a scientifically sound attitude. A scientific attitude does not involve personal feelings and interests. One of our foreign colleagues told me during MOP6 in India that they were worried that too many guidances would block the growth of biotechnology in their country, and that they did not like the continuation of the AHTEG to develop further guidances. This is of course not a scientifically sound attitude. Actually, as far as I know, only GM crops from Monsanto are largely planted in their country, and I am not aware that they have their own commercialized ones.
That is somewhat away from our topic here. However, I strongly feel the importance of a scientific attitude and propose that the test, including its tools and results, should be reviewed by scientists, especially those with experience in biosafety, ecology and the environment. This would help assure the scientific manner of the test and provide useful evidence for regulation, management and decision-making. Otherwise, the test and its results could amount to nothing.
posted on 2013-01-18 15:35 UTC by Mr. Wei Wei, China
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4028]
I have reviewed the contributions made to the forum to date and would like to offer the following comments starting with some general comments around the questions posed by Hans (which I found useful to help focus the discussion). Hans states that the “objective of this round of discussions is to brainstorm on the conceptual development of tools that can be used to conduct the testing”. This objective is rather broad and I remain puzzled as to how we might ‘develop’ tools. Perhaps this is a semantic issue.
In response to the guiding questions –
1. What, in your opinion, comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment? In this context, please present your concept of such tools in general terms, for example: what kind of materials and methods do you need to be able to do a testing on “actual cases of risk assessment”?
JG response: Despite that additional guidance, I remain puzzled by this question. I note that some of the participants in the discussion have suggested tools such as meetings and surveys, while others have suggested working through existing risk assessments with the guidance, essentially on a line-by-line basis. I don’t believe that a meeting would be of any use, since we would need to have ‘tools’ to work with before the meeting (hence a circularity), and as others have noted we have already attempted the survey approach. If we were to use a survey, then we would need to be much clearer about what information was being sought and how it would assist in evaluating the guidance. So in summary I align with those who have suggested that the best ‘testing’ is to review who has applied the guidance (at the coalface) and what issues they have had with it. If nobody has used it, then we have to question its value.
2. What specific tools for testing would you recommend for each of the different parts of the Guidance?
JG: No response – I don’t see how the guidance (focussing on the Roadmap) can be tested in this piecemeal fashion. Surely the tool will be the same?
3. How can these tools be modified to capture specific information on each of the three sections of the Guidance?
JG: Once again I have difficulty understanding the concept of ‘tools’ in this context. If tools were to include further surveys and/or meetings, then the specific questions to be addressed might differ for the different sections of the guidance.
4. How can the tools be adapted to accommodate the broad sets of experiences from the various target groups?
JG: This depends on what we mean by tools. Yes, the range of experience of different groups will be very wide (and what is meant by ‘target groups’ – have we actually determined who the audience for the guidance is yet?).
5. In what form should this tool be presented to the target groups that will carry out the testing?
JG: This also raises the question of what is meant by ‘target groups’. Until we decide on tools we can’t even think about the form of presentation.
--- Didier Breyer suggests that “One should not forget that one of the main objectives of the Guidance (and in my opinion it was the primary objective) is to provide a structured framework facilitating access to other documents useful for the risk assessment of LMOs ….. In that sense, the Guidance is not a self-sufficient document to support a risk assessment. There are plenty of Guidances, scientific papers, risk assessment reports, books... where more detailed information on generic or specific aspects of the risk assessment (including on how to conduct a risk assessment) can be found.” I agree very much with this comment. The Roadmap is not a ‘cookbook’ and needs interpretation according to the particular circumstances in which it is being applied.
Jack Heinemann (supported by Dr Wei Wei) talks about evaluating different statistical tests and defining test protocols. For what? The concept of evaluating “different statistical tests that are in use and offer advice on the minimum number of samples to be used, how long tests should go etc” is nonsensical and, as Geoff Ridley points out, is well beyond guidance and the mandate of the MOP6 decision.
Jack supports Angela’s suggestion of a regional workshop using the Guidance on an actual case of risk assessment. As I understand it this type of testing has already been conducted, and I remain of the view that further meetings and workshops are unlikely to provide ‘testing’ – the best testing will be use of the Guidance in ‘real’ situations.
Dr Wei Wei discusses the scientific attitude and states that “The scientific attitude does not contain any personal feeling and interests.” This is rather optimistic. Over more than 20 years of teaching and undertaking risk assessments in various contexts, one of the really important matters that I always stress is that (complete) scientific objectivity is a goal that is essentially unachievable. There are subjective aspects to choosing the models for analysis, to collecting data and to analysing data. What we must do is to state our assumptions clearly and also be very clear about how we have addressed all the different types of uncertainty inherent in any risk assessment.
I strongly support Isao Tojo’s request for a summary of the process of developing the Guidance (recognising that there will be differences of opinion).
Tom Nickson states that “The view that I continue to maintain is that guidance should help the user understand how a task in the future might be approached based on the experience of the past. Annex III provides the fundamental objective, application for, principles of, methodology and some points to consider for a risk assessment to be considered as compliant with the Protocol. I believe it was never the intent to tell Parties HOW to conduct a risk assessment. Both the CBD and Protocol are pretty clear that decision-making, including what constitutes an appropriate risk assessment, rests within the scope of national sovereignty where interpretations of precaution, “adequate level of protection” and meeting other international obligations are made. As such, I feel the true utility of the guidance is directly related to how well it helps users understand information in existing guidance produced by functioning regulatory authorities (those to whom dossiers have been submitted and decisions have been made based on a risk assessment). Therefore, testing should first compare the current guidance (the roadmap) with existing guidance and regulations focusing on coherence and correspondence with a functioning regulatory system’s operations. Second, the roadmap should be compared with Annex III of the Protocol to ensure coherence (compliance without extending beyond what is in the Annex).”
I agree very strongly with Tom’s view as expressed here.
Geoff Ridley makes an interesting point where he says that “ …… As it [the Roadmap] is only guidance many parties were reluctant to endorse it at MOP6 because of the concern that such an endorsement would lift it from being “only” for guidance to some mandatory state.” This is really important. The Roadmap should not be seen as something that could be considered to be mandatory.
Geoff’s suggestion that the Guidance might be evaluated by means of a simple questionnaire is practical. However, what will it really tell us? My fear would be that it would open the door to a full review of the Roadmap, whereas, what I think we really want to know is how useful it is.
In conclusion, I have found the discussion to date to be at too high a level to be of much value, since we actually have to think about how we are going to do something. I remain confused as to what ‘tools’ might look like, since I am not sure that we are yet clear about what we are testing for – i.e. scientific integrity, usability, flexibility, etc.
posted on 2013-01-18 20:37 UTC by Janet Gough, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4029]
Dear participants, I also want to thank Hans Bergmans for his very useful introduction to our discussions. It is now clear to me that the Secretariat will develop appropriate tools to test the “Guidance on Risk Assessment of Living Modified Organisms” with the inputs of the Open-Ended Online Forum. In accordance with the proposed objective of this round of discussions, “to brainstorm on the conceptual development of tools that can be used to conduct the testing”, I will start with some of the questions that Hans posted, referring to other contributions of participants.
1. What, in your opinion, comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment?
Here I think that we first need to agree on what kind of testing the Guidance will benefit from the most. In my point of view, a key guiding concept that the COP-MOP decided is in paragraph 1(b): “The Guidance will be tested nationally and regionally for further improvement in actual cases of risk assessment and in the context of the Cartagena Protocol on Biosafety”. When we proposed these “actual cases” at the MOP, we were thinking of something very similar to what Geoff Ridley referred to in his posting, more in the sense of testing the “relevance and usefulness of each part of the guidance as we try to apply it in a risk assessment”. So each Party – since the decision says in the context of the Cartagena Protocol – that has an actual case, meaning a current application (with all the information that this usually contains, relevant for the particular environment where the GMO will be released), could conduct a risk assessment using the different parts of the guidance and report the results of their testing and their suggestions for improvement. For this reporting, I agree with Piet that there is a need for a particular tool to report the results of this testing in a useful way.
I think that this approach for testing will be the most useful to find ways to improve the Guidance.
Now, from this interpretation we would leave out of the testing non-Parties (which may have many actual cases and much experience, but not in the context of the Protocol), and Parties that do not have an actual case and hence no experience. So we may want to find a way to effectively include other relevant stakeholders, but I also think that to do so effectively, we need to adapt the tools we use and have the objectives of the testing clearly identified.
For Parties that do not have an actual case, we could make use of the mechanism that Boet proposed in her posting to make available an “actual application that is probably meaningful to all testers”, but here we have to keep in mind other considerations, for example: if we use only one case, some parts of the guidance will not be tested; is there a need for criteria to select the appropriate cases?; and if we decide on only one dossier, will it have the relevant information in relation to the likely receiving environment to actually apply the Guidance in the risk assessment? Furthermore, this will be a somewhat theoretical (more academic) testing, and hence maybe not in the real context of the Cartagena Protocol. So the results of this testing will have limitations in their usefulness for improving the Guidance.
For non-Parties, we need to look at other parts of Decision BS-VI/12, paragraph 3: “Also encourages Parties, other Governments and relevant organizations, through their risk assessors and other experts who are actively involved in risk assessment, to test the Guidance in actual cases of risk assessment and share their experiences…[…]” So the testing process will also include non-Parties and relevant organizations, which may not use the Guidance or may want to use it for different purposes. Because of what I mentioned before, and since we could prefer that the Guidance also serve other governments, and even public researchers or small companies, to guide their applications, I think that we need to be very flexible about how each different actor does their testing, and be conscious that adapted tools will be needed to gather the results obtained from the testing and place them in their particular context.
2. What specific tools for testing would you recommend for each of the different parts of the Guidance?
Here I see different kinds of tools for different purposes, and I want to clarify that I am using the word “tool” with a very broad connotation.
a. The tools for the actual testing. These need to be very flexible according to who is performing the testing and in which context. For example, if the testers are evaluators using an actual case (or different actual applications) in a regulatory process, they could use the approach Boet mentioned of doing their risk assessment and comparing the process they usually follow to the different components of the Roadmap (Part I of the guidance), and evaluate whether the associated background documents are useful as presented. This kind of testing will be useful to evaluate the Guidance, particularly the Roadmap, and to inform whether it is compatible with the way risk assessment is normally performed in established legislative systems, whether it lacks some important guiding issue, or whether it goes beyond what is needed.
Here I think we do not need to limit ourselves to one type of application (or actual case) but should use different types. This will also help to test whether the Guidance is useful for an application for a large-scale release, and whether it is useful for an experimental small-scale release. This testing does not require a workshop, just the willingness of the evaluators to consider the Roadmap in their day-to-day activity and respond to the questionnaire with their results. Depending on the actual cases (applications) used, the testers could test other parts of the Guidance (though I think there will be few possibilities to test Part II, on LM mosquitoes or abiotic stress tolerant plants). Finally, I think this kind of testing will be very useful for comparing the Roadmap with Annex III of the Protocol.
If the testers do not have an application or experience, the testing could perhaps be done as part of a workshop; here it should be clear that the objective of the testing will be different, as it may have a “capacity building component”. The results of this testing, as I mentioned, will be limited, but may be informative about the usefulness of the Guidance “to digest a dossier” for less experienced evaluators, as Piet put it. If the testers are other stakeholders, the objective and manner of testing could be different and the tools could change. The results of different testers can be complementary in identifying which parts of the Guidance should be improved to address different stakeholders’ needs.
b. Tools for gathering and analyzing the results of the testing process. Here we could use a survey or questionnaire that includes different elements or types of questions:

1) Questions to contextualize how the testing was performed and by whom, differentiating, for example: whether an actual case was used; whether the case was for a small- or large-scale release; whether the testers are evaluators in an established regulatory system or academics not bound by formal rules. It would also be important to know whether the evaluators have done risk assessments of applications before, and whether the testers are NGOs, or non-Parties with or without experience (for this part, some of the questions proposed by Hiroshi Yoshikura in his posting are useful).

2) Elements to gather the results of the testing process. Here the type of questions that Boet proposed may be helpful, since they focus on the usefulness of the Guidance; in addition, questions that inform on congruence with Annex III will be important. For this part I agree with Boet Glandorf’s posting, which says that “The answers in the test report should not be the result of the risk assessment, but should tell us any problems of any kind encountered in the process described in the guidance”, particularly the Roadmap.

3) Elements or questions to evaluate the usefulness of the Background Materials and the Glossary as other components of the Guidance. As Didier mentioned, there could be some questions to test whether the Guidance adequately points to relevant sources of information.

The questions could be developed with optional answers or open spaces and, where needed, some could include a section for suggesting improvements.
c. Tools for testing different parts of the Guidance. These will depend in part on the actual case used for the testing. If the actual cases are the ones the evaluators are receiving, some parts of the Guidance may not be tested (e.g. LM mosquitoes). One way to deal with this is to involve testers having different actual cases. A key question for testing the different parts might be whether the different parts of the Guidance are complementary with the Roadmap.
With this I have answered parts of the other questions posed.
To conclude, for now, I also have to agree with Geoff Ridley that there are parts of the Guidance where you do not require a tool to find out that they are not useful, but I hope that the testing process will clearly highlight these, and that the report of the testing will provide clear suggestions to fix these parts.
posted on 2013-01-19 04:09 UTC by Ms. Sol Ortiz García, Mexico
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4030]
Dear forum participants,
Thank you Hans for moderating, it is lovely to be involved in such a robust forum discussion.
Firstly, I believe a regional Workshop-based test would be greatly more explorative, inclusive and informative than a Questionnaire-based approach. In contrasting the two proposals, Workshop [Sarah #4041, Angela #4018] and Book-like Review [Geoff #4024], it is hard for me to see these as incompatible ideas; in fact, a logical output of a Workshop-based analysis of an actual application may be something akin to Geoff's imagining of a Book-like Review. However, the two processes of synthesising a review, Workshop and Questionnaire, will in my opinion lead to different types of review. Workshops have the benefit of localising a range of stakeholders and allowing them to communicate dynamically. Questionnaires, on the other hand, are performed in isolation and are an inherently prescriptive process. Workshops are inherently more explorative and are therefore a better test of the Guidance. Furthermore, in response to Sol [#4029], it is baseless and arbitrary to imply that parties that hold Workshops have an inherent "capacity building component" and that their contribution to the testing should somehow be treated differently - "If the testers do not have an application or experience, maybe the testing could be done as part of a workshop, here it should be clear the objective of the testing will be different; it may have a 'capacity building component'".
Secondly, I would like to seek clarification as to whether a regional workshop based assessment of an actual application has been used to test the Guidance before (as stated by Janet #4028)? To my knowledge, a survey was performed but never a workshop.
Thirdly, I would also like to write briefly on the topic of inclusivity in testing the Guidance. I am particularly wary of criteria that can be used to discriminate against the opinions of certain testing participants for any reason other than the merit of their argument. It would be supremely unfortunate if the testing of the Guidance were not inclusive and the opinions of a few gained normative power simply by creating a discriminatory structure. Specifically I refer to: Tom's point 4 [#4021] - "The questionnaire should collect certain data such as years of experience, number of actual cases of risk assessments performed (or parts thereof)"; Janet's use of "who … at the coalface" [#4028] - "the best ‘testing’ is to review who has applied the guidance (at the coalface) and what issues they have had with it"; and Sol [#4029] - "it would be important to know if the evaluators have done risks assessment to applications before, if the testers are NGOs, or non Parties with or without experience".
Lastly, the issue arises as to what constitutes good Guidance [Geoff #4016] and what is prescriptive. As I understood Geoff's comment that "'Appropriate statistical methods should be used where appropriate'. If I am in a country that has little experience of risk assessment how do I know what is appropriate?", he was making the argument that the Guidance wasn't providing useful guidance. To the suggestion that perhaps the next AHTEG could take that up if it were a problem [Jack #4017], the response has been that providing that sort of guidance would be too prescriptive [Geoff #4024, Janet #4028]. This creates a situation where the Guidance could never be satisfactory, because it is either not useful or too prescriptive! Instead I agree with Wei Wei's [#4029] point that we should have an open mind and test the Guidance rather than argue that it could never succeed.
Best wishes,
Leighton Turner University of Canterbury
posted on 2013-01-19 06:05 UTC by Mr. Leighton Turner
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4031]
I agree with others that the choice of the word “tools” is unclear, so it will be up to the online participants to clarify what this might mean in the context of the mandate from the Parties to test the roadmap.
I agree that the testing should “question the relevance and usefulness of each part of the guidance as we try to apply it in a risk assessment”, as Geoff Ridley stated early in the discussions.
I also agree strongly that the approach to testing needs to be considered at the same time as how the results of the testing will be reported and analyzed. This is important to gather information about the experience level of the tester(s), the types of LMOs considered, the nature of the environmental release (confined versus unconfined), and the parts of the roadmap guidance considered. As others have suggested, it will be useful to specifically ask testers if they use guidance documents other than those being tested here (e.g., there are other guidance documents being used by governments and available online).
• A number of testing approaches should be offered to those who will be doing the testing. The testers can choose among a range of possibilities as long as they report their choice and methodology.
• The reporting of testing results (and the analysis of the results) needs to be considered when devising how the testing is to be done.
As I understand it, this online forum discussion should propose ways to do such testing. Here are some of my thoughts at this stage.
Testing the roadmap guidance: 1. Evaluate consistency with the Protocol and Annex III, as well as the mandate for the roadmap to cover RA/RM (not decision-making). For testers who can, additional comparisons to their national risk assessment guidance documents might be informative.
2. Evaluate the relevance and usefulness of each part of the guidance as the RA is done.
3. Use real-world information and circumstances whenever possible. Dossiers are in the public domain online at USDA-APHIS for requests for unconfined environmental releases.
4. Test a request for at least one confined environmental release. These types of releases are mentioned only briefly in the roadmap, but the type and quantity of information for such releases is often much different than requests for unconfined releases.
5. Test a request for at least one unconfined environmental release. Ideally, a tester would compare the appropriateness of the roadmap guidance for the same LMO – for a risk assessment for a confined release, then for a risk assessment for an unconfined release.
6. Choosing the LMOs to test. In order to be closer to “actual cases”, choose LMOs that have already been evaluated in at least one country (suitable example dossiers could be constructed from information in the BCH):
a. LM plants (the choices below offer a range of phenotype complexity and world-wide experience, and trees as well as herbaceous plants):
i. Male sterile canola (a relatively “easy” case to test)
ii. Bt cotton (a more complex case to test)
iii. Papaya engineered for resistance to papaya ringspot virus
b. LM virus – recombinant rabies vaccine (vaccinia) used for control of rabies in Europe, North America and elsewhere (this actual case has been evaluated and used in the environment over the past 20 years and would be an interesting case to evaluate in testing the roadmap guidance)
7. Testing by which countries (each case should be tested by each of the following to give a sense of the usefulness of the roadmap):
a. Those that have regulatory systems in place, and have years of experience in conducting RA used for making decisions by their governments
b. Those that have regulatory systems in place, but have limited or no experience in conducting RA used for making decisions by their governments
c. Those that do not yet have regulatory systems in place
8. Testing by organizations and institutions other than governments:
a. Research institutes (these might be national or international, e.g., CGIAR centers)
b. University researchers (these might be involved in agricultural research, RA, public health, etc.)
This is what I have been thinking about so far. The online discussion has been very useful for me to try to devise a constructive and practical way to test the guidance consistent with the mandate from the Parties. My thanks to all for sharing their ideas, and I thank Hans especially for moderating the discussion and offering some structured ways to approach the task of testing.
posted on 2013-01-19 14:08 UTC by David Heron, United States of America
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4032]
Dear participants,

We are about halfway through this discussion, and I am very grateful to all who have already given their time and effort to provide answers and comments to the leading questions. From the reactions it is clear that there are various and divergent views. Based on the arguments that follow, I propose that the basic questions we address in the remainder of the Forum include:

1) What (type of) questions are needed to focus the discussions of the testers?
2) What guidance could the testers need to choose the actual cases of risk assessment; should the Secretariat provide actual cases of risk assessment that could be used, or, in addition, should the Secretariat show where actual cases of risk assessment can be found?

I urge you to be as concrete as possible in your suggestions. The arguments for focusing on these questions are the following. In BS-VI/12 (starting on page 88 in http://www.cbd.int/doc/meetings/bs/mop-06/official/mop-06-18-en.pdf) the Executive Secretary is charged with developing appropriate tools to structure and focus the testing of the Guidance; in our mandate we are charged with assisting the Secretariat in this task. The testing will basically be a process of discussion to find out whether improvements are needed so that the Guidance will better assist risk assessors in actual cases of risk assessment. ‘Actual’ should be taken literally: ‘existing in act and not merely potential’ (Merriam-Webster Dictionary). Therefore the testing needs to take into account one, or probably more than one, ‘actual’ case of risk assessment. In order to keep the testing focused, it would be advisable to provide a number of questions to the testers. This will help make the results of different testers comparable.
This is very important so that a report of the test results can be made, in a transparent manner, which the Secretariat can use to analyze the feedback, also in a transparent manner, and to report on possible improvements to the Guidance. Of course, the discussion will still take into account the 5 guiding questions posed in my first posting at the top of this discussion thread. I would ask you furthermore to take into consideration the following comments, which I base on a number of the comments made so far in the Forum.

The method for the testing:
- From the comments it is clear that the most advocated method for testing the Roadmap is some type of workshop, preferably allowing direct, ideally face-to-face, discussions among risk assessors.
- There is some controversy as to whether performing a risk assessment should be part of the testing. In any case the testing should provide answers as to whether the Guidance assists in performing a risk assessment. Therefore, a risk assessment will in some way figure in the testing. To be useful, this risk assessment will be (or have been) performed according to the method of risk assessment described in the section ‘Conducting the Risk Assessment’ in the Roadmap and in other parts of the Guidance.
- It has been argued that the testers need flexibility in the way the testing is performed. The limiting consideration, however, is that the method used should lead to relevant answers to the questions asked.
- There will be more and less experienced testers. Less experienced testers may need materials provided for them, as they have few actual cases available themselves, but there should be flexibility so that experienced testers can use their experience and profit from it.

The materials for the testing:

The Questionnaire
- The results of the testing will be used to improve the Guidance where necessary. Therefore, the questionnaire should ask for concrete and motivated answers.
- Standardized questions where usefulness is scored on a scale would therefore not be helpful.
- In a questionnaire, different types of questions will be posed, e.g.:
1) Elements to gather the results of the testing process. The questions should aim to find out whether, and what kind of, problems were encountered in the risk assessment process. From the answers it should be clear where and when the Guidance was not able to assist the testers. The questions should also ask for suggestions for improvements that would help with any problems encountered.
2) Questions to contextualize how the testing was performed and by whom, including information on the level of experience that the testers feel they have.

The actual cases of risk assessment:
- The materials that can be offered to the testers will primarily be the ‘actual cases of risk assessment’.
- However, for testing the more general chapters in the Guidance, such as overarching issues, an actual case will probably not provide information. The necessary additional knowledge and materials will probably be specific to the national or regional legislation under which the testers are working.
- There should be flexibility so that testers can use their own materials if they wish to do so, such as actual cases of risk assessment that they have handled; for transparency it would be best if these cases are publicly available.
- In order to reflect the full range of cases treated in the Guidance, it is advisable that the actual cases: 1) range from small-scale release to unrestricted use and placing on the market, and testers should be encouraged to take several types of actual cases into account; 2) cover all types of LMOs. Testers should be encouraged to also use an actual case of a less common type, such as those treated in Part II of the Guidance, on specific LMOs and traits. Some concrete examples of actual cases have already been proposed in the discussion.
I am looking forward to your comments in the remainder of this discussion. Hans Bergmans
posted on 2013-01-20 12:53 UTC by Mr. Hans Bergmans, PRRI
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4040]
I wish to thank Hans for summarizing and structuring what has been proposed.
In my view, multiple testing methods might be applied; flexibility in this respect would be appreciated. The testing may be performed at national, regional or institutional levels to combine various experiences and forms of organization. It might involve an actual RA of LMOs or traits, as well as discussions of RA dossiers already performed and available from the BCH database, or national/regional cases. Face-to-face meetings at national and regional level to discuss a performed dossier (or dossiers) are preferable. A questionnaire as a tool will be helpful for producing a comparable analysis of testing results across the variety of methods and types of LMOs/traits used during the testing.
1) What (type of) questions are needed to focus the discussions of the testers?

The questions should avoid requesting simple “yes” or “no” answers. They should be both general and specific; the specific questions should be relevant to the parts of the Guidance on a paragraph-by-paragraph basis. The questions should ask:
- Is the part/paragraph of the Guidance relevant in relation to the case tested?
- Is the technical language understandable? Proposals and suggestions.
- Is the method applied to the RA relevant?
- What kind of requested questions/information is missing from the Guidance?
- What national/regional/institutional experiences could additionally be incorporated into the Guidance?
- What reference materials are available to be added to the Guidance?
- General suggestions to improve the Guidance.
- Specific suggestions to improve the Guidance.
2) What guidance could the testers need to choose the actual cases of risk assessment; should the Secretariat provide actual cases of risk assessment that could be used, or, in addition, should the Secretariat show where actual cases of risk assessment can be found?

Dossiers of actual cases of RA may be provided by the Secretariat, and the Secretariat should also advise on how such dossiers can be found via the BCH.
Returning to the previous discussions, I would say that I very much respect the evaluators/regulators who have 20 or more years of experience in RA, and their extremely valuable scientific and practical skills and knowledge. At the same time, I would urge us not to underestimate professionals in other, multidisciplinary fields who may have limited practical experience in this specific activity. I would not be so categorical as to dismiss their capacity to contribute to the usefulness and quality of the Guidance. A multidisciplinary and multistakeholder approach to testing, based on science and on countries’ experiences, would be the right approach.
I do not see the scope of the testing of the Guidance as bringing it closer to the existing guidelines, or to the knowledge and experience in RA expressed by some colleagues; in my view that would be completely wrong. I would be careful not to make the Guidance a “pocket guidance”, comfortable for those regulators who have already developed their internal guidelines. I believe the Guidance should be of such quality as to be useful for all Parties and for past, present and future RAs. The Guidance comprises an overall, science-based approach, covering as many aspects of risks and cases as possible, and should not be simplified or reduced merely for the usefulness and comfort of a number of regulators. Other relevant existing guidelines may be taken into consideration to the extent that this does not prejudice the scope of the Guidance to be in conformity with the Cartagena Protocol. In this context I appreciate the statement of Wei Wei (!) to be consistent with strong scientific values and to be wary of possible “interests”.
Angela
(edited on 2013-01-21 10:15 UTC by Angela Lozan)
posted on 2013-01-21 10:03 UTC by Angela Lozan
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4041]
Dear all
Thank you for the interesting discussions and thanks also to Hans for the useful summary.
I would firstly like to highlight that the Guidance has been previously tested (by Parties, other Governments and relevant organizations) and revised accordingly. The point of this round of testing, at national and regional levels, is for ‘further improvement’ in actual cases of risk assessment and in the context of the Cartagena Protocol on Biosafety. So we are not starting from scratch and can usefully build on what has already been done. For example, I am aware that in my own country, Malaysia, our Genetic Modification Advisory Committee has used the Guidance to guide its work, i.e. in actual cases of risk assessment. There may be other Parties that are also already using the Guidance in similar ways, so collating this information and analyzing the results would be one useful input.

The idea of a regional workshop as proposed by Moldova is an attractive one, and its offer to host such a workshop a very good opportunity to take the testing process further.

With regard to the questions that are needed to focus the discussions of the testers, I feel it would be important to focus on the process – whether the Guidance was able to assist in the risk assessment, how it could be more effective in places, whether it raised issues that the testers may have missed otherwise, whether it helped point to useful resources (e.g. through the links to the background documents), whether it helped organize the risk assessment process appropriately, and what challenges remain. It would seem to me also that all parts of the Guidance should be used, and hence tested, as a package, and therefore the tool(s) need not be different for each part of the Guidance.
I believe that the overall purpose of the testing process is to make a fair, comprehensive and holistic evaluation of the Guidance. In this respect, I do not agree with the proposal of some that the purpose of the testing is to “question the relevance and usefulness of each part of the guidance as we try to apply it in a risk assessment”.
Kind regards Lim Li Ching Third World Network
posted on 2013-01-21 13:35 UTC by Ms. Li Ching Lim, Third World Network
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4042]
Dear Hans Thanks for formulating the questions that can help us to identify the appropriate tools to testing Guidance. Thanks to colleagues also for giving comments and interesting ideas. I have the following few comments:
1) The Roadmap is intended to make Annex III more comprehensible when conducting risk assessment. Therefore the target audience should be national assessors, developers and regulators with experience. However, there should be regional workshops for capacity building on the Roadmap.
2) On the issue of aligning the Guidance and the Manual, the two documents should remain independent; however, the Manual should be restructured in such a way that its content serves as an explanatory guide to the Guidance. The Guidance should also be flexible enough to allow for domestication, so that national authorities can fashion it according to national needs.
3) On the issue of integrating either all of the Guidance or just the Roadmap section, we should focus on aligning the content of the Roadmap and the Manual.
Rufus Ebegba, National Biosafety Office, Federal Ministry of Environment, Abuja-Nigeria
posted on 2013-01-21 13:51 UTC by Dr. Rufus Eseoghene Ebegba, Nigeria
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4047]
Thanks to all participants for brainstorming. The discussion of tools for testing the Guidance has also convinced me of the importance of a comprehensive approach, and I support the opinion of Dr. Angela Lozan on multiplying the methods of testing the Guidance. As I see it, at this stage the main tools have to be: real testing of risk assessment, with the testers’ results placed in the database (especially for Part II); and face-to-face expert meetings that bring together experienced risk assessors and specialists who are less experienced but skilled in different branches of science, to determine the degree and scope of such methods. I also agree with the view that the Guidance (especially its Second and Third Parts) has special value for less experienced risk assessors and researchers. In this respect a questionnaire is also a useful approach for comparing the views of experienced experts and of scientists less experienced in risk assessment (to whom this Guidance, at least for now, is dedicated), and for making it relevant to the latter in real RA cases. I welcome and support the proposal of Dr. Angela Lozan to initiate regional and sub-regional meetings. In my view, this approach needs good organization and the involvement of advanced risk assessors from countries of the region and sub-region that have wide experience in developing similar modified organisms and in their assessment. It can also be useful for drawing a wider range of skilled experts from various branches of science into the discussion.
I want to thank Hans Bergmans for moderating and would like to comment on his last questions.
1. What (type of) questions are needed to focus the discussions of the testers?
To the questions offered above, as the representative of a country less experienced in RA, I want to add a question addressed to both experienced and less experienced experts, to reveal the needs of the latter:
- Does the structure of the Second Part of the Guidance facilitate carrying out real cases of RA? Is it necessary to add the real sequence of tests and decisions to the different steps of Part II? What are your suggestions for structuring it with its use as practical guidance in mind?
2. What guidance could the testers need to choose the actual cases of risk assessment; should the Secretariat provide actual cases of risk assessment that could be used, or, in addition, should the Secretariat show the way where actual cases of risk assessment can be found?
From my point of view, the Second and Third Parts of the Guidance are rather vague for evaluating actual cases of risk, and give too much scope for information assessment in the case of countries less experienced in risk assessment, like Belarus. In this context, I believe that for the Guidance to better assist less experienced testers in actual cases of RA, the individual parts of the risk assessment (Parts II and III) need to be made clearer. To simplify risk assessment for specific types of LMOs and traits, one option is to add a sequence of tests and decisions, which could be expressed, for example, as a flowchart. Each unit of this scheme could contain a direct reference to the actual methods of analysis (actual cases of risk assessment) located in the Background Materials linked to the Guidance on Risk Assessment of Living Modified Organisms. Flowcharts should be labelled as “Example” to show their flexibility and conformity with Annex III of the Protocol (the case-by-case basis of risk assessment).
Regards, Galina Mozgova, The National Coordination Biosafety Centre of the Republic of Belarus
posted on 2013-01-22 08:06 UTC by Ms. Galina Mozgova, Belarus
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4054]
I would like to thank Hans for helping us to focus on the more specific 'hows' of the proposed testing.
However, I would like to reiterate that I don't believe that a workshop, or, probably more logically, several workshops, would be a good use of resources (as Geoff Ridley has pointed out, there would be financial constraints on a number of countries/persons that might result in less effective participation than would be desired).
If, however, this approach is adopted, there are still a number of basic questions that need to be resolved before we get to the 'nitty gritty' of the testing.
What would the (specific) purpose of the workshop/workshops be? Hans has said that "The testing will basically be a process of discussion to find out whether improvements are needed so that the Guidance will better assist risk assessors in actual cases of risk assessment." This seems to me too imprecise to be useful - and surely we need to focus not just on 'risk assessment' but on 'risk assessment as conducted in the application of Annex III'. There is plenty of good guidance on risk assessment out there, but we are looking at a particular application.
What would be 'tested'? ie just the Roadmap, or the Roadmap plus the additional guidance documents? I would make the point that to my mind most of the additional guidance documents are case studies - how do you 'test' a case study?
Who would attend? Attendance should be representative, but I am unclear as to how this would be achieved.
What would the output from a workshop/workshops be? And how would this be reported and used? Hans states that " The results of the testing will be used to improve the Guidance where necessary." - but is this within the mandate?
regards janet
posted on 2013-01-22 18:40 UTC by Janet Gough, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4055]
Dear Hans and all,
I would like to thank Hans for moderating our discussion.
As has already been pointed out by Geoff Ridley [#4045], because of many constraints, such as cost, time, etc., I think a workshop would not be a realistic tool for the testing. In this regard, I agree with the comments from Janet Gough [#4054], especially: “Who would attend? Attendance should be representative, but I am unclear as to how this would be achieved.”
Concerning focusing the testing, i.e. Hans's question “1) What questions are needed to focus the discussion of the testers”, I would like to follow the comments from Paul Keese [#4046]; i.e. “The answer seems to lie in 5(b) of BS-VI/12 “feedback provided as a result of testing on the practicality, usefulness and utility of the Guidance, (i) with respect to consistency with the Cartagena Protocol on Biosafety; and (ii) taking into account past and present experiences with living modified organisms”.”
In response to Hans's request to be “as concrete as possible”, and with respect to consistency with the Cartagena Protocol on Biosafety, one of the important questions should be whether the Guidance is strictly about risk assessment concerning the conservation and sustainable use of biological diversity, as prescribed in Annex III of the Protocol. It must not be taken as risk assessment of the environment as a whole.
It is stated that “The use of the wording “biological diversity” in the context of Article 1 (of the Protocol) indicates a fairly narrow definition of the object of protection. By contrast, a number of existing national laws extend the scope of protection to the environment as a whole, including not only biological diversity but also other parts of the environment such as air, water and soil.” (IUCN, “Explanatory Guide to the Cartagena Protocol on Biosafety”, para. 168). It is also noted that “Damage to biological diversity does not seem to be synonymous with damage to the environment.” (CBD, “Biosafety Technical Series 03”, p. 21)
However, the first sentence of the “Background” in the “Roadmap” states that “This “Roadmap” provides guidance on assessing environmental risks of living modified organisms”. This seems to indicate that the Guidance would be for risk assessment of the environment as a whole. Originating from such a misunderstanding, there appear to be some parts which go beyond the matters to be handled under the Protocol. In order to avoid confusion, those parts had better be deleted from the documents, or reworded as mere examples from national regulations where risk assessment of the environment as a whole is required.
Therefore, the questions concerning consistency with the Protocol should include whether each part of the Guidance is strictly about risk assessment concerning the conservation and sustainable use of biological diversity.
Sincerely
Isao Tojo Ministry of Agriculture, Forestry and Fisheries, Japan
posted on 2013-01-23 02:38 UTC by ISAO TOJO, Ministry of Agriculture, Forestry and Fisheries
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4058]
First of all, thank you Hans for moderating this session as well as providing guidance to the open-ended online forum.
The first question that Hans posted asks what comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms”. While various suggestions have been given, I think one of the more pragmatic ways is to test it through the National Competent Authorities and collect feedback from there, bearing in mind that the guidance should help and support risk assessment and risk assessors in the framework of the Cartagena Protocol and its Annex III. About the use of workshops, I am afraid these would be limited to resourceful countries; I have personally conducted workshops on RA, and the outcomes were not quite up to expectation. It is also quite demanding financially to conduct a more thorough workshop, not to mention a regional meeting, unless special provision from funding is obtained. Arguably, the more straightforward tool would be questionnaires (which must be well structured according to the framework of the Cartagena Protocol and Annex III therein), and, as Beatrix suggested, a matrix approach is a good idea. As every country may have slight differences in achieving the objectives of RA and RM, these tools should logically be modified according to domestic legislation. Finally, as to the form in which this tool should be presented to the target groups that will carry out the testing, my personal opinion is that it should be provided as printed material as well as made available online for easy access and reference.
Thank you and best wishes to all.
Dr Kok Gan CHAN
Senior Lecturer University of Malaya Malaysia
posted on 2013-01-23 11:48 UTC by Professor Dr Kok Gan Chan, Malaysia
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4059]
Hans,
Thanks for bringing us back to the actual assignment given by MOP, i.e. to assist the Secretariat in developing appropriate tools to structure and focus the testing of the Guidance.
While I have enjoyed reading the general observations about testing, I believe that we are reaching the point where we have to move to concrete proposals, because at one point the Secretariat will have to distribute a document with instructions and guidance for testing.
First: “what are we testing for”?
Referring to point 5 of Decision BS VI-12, we should test for a) practicality, b) usefulness and c) utility of the Guidance, (i) with respect to consistency with the Protocol; and (ii) taking into account experiences with LMOs.
Second: “what are we testing and which tools are we using?”
The MOP decision is clear: we are to test “the guidance”, which is the roadmap plus the specific guidance documents on abiotic stress resistance etc. I believe that it is good to test those specific documents as well, because that will allow us to identify any duplications and contradictions. In fact, given the charge to also look at the manual, we could also include the manual in the testing.
As for “tools”, the key tools in any testing are the questions to be answered, the hypotheses if you will.
Here are some suggestions for questions to be answered for the different objectives of the testing:
1. Questions vis-à-vis testing for consistency with the Protocol:
a. Are there parts of the guidance that go beyond the Protocol?
b. Are there parts of the guidance that are in conflict with the Protocol?
2. Questions vis-à-vis testing for practicality and usefulness:
a. Are there parts of the guidance that are not easily understood by novice risk assessors?
b. Are there parts of the guidance that are ambiguous?
c. Are there parts of the guidance that do not provide a clear way forward when applied to “actual cases” (what this means needs to be fleshed out)?
3. Questions vis-à-vis testing for utility:
a. Are there parts of the guidance that need further updating on the basis of past and present experiences with LMOs?
Further, we should also provide clarity about the reporting requirements for the testing, so that the results can be compared through various queries.
Piet
posted on 2013-01-23 14:05 UTC by Mr. Piet van der Meer, Ghent University, Belgium
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4061]
Dear colleagues,
first of all I would like to thank Hans Bergmans for skilfully moderating this round of online discussions. I am happy that many interventions and constructive suggestions have been made so far and I would like to add my thoughts as a member of the open-ended online expert forum. When discussing the appropriate tools to test the Guidance I think it is important that we take into consideration what COPMOP6 has tasked us to do and to put the exercise into the context of what has been undertaken before COPMOP6.
I think it is clear from the terms of reference of decision BS-VI/12 that our task is to “provide input, inter alia, to assist the Executive Secretary in his task to structure and focus the process of testing the guidance, and in the analysis of the results gathered from the testing” (quote from para 1a of the annex). The expected outcomes are “moderated online discussions relating to the testing of the practicality, usefulness and utility of the guidance” (quote from para 3a of the annex). I agree with our moderator and those of you who have argued that we should suggest tools that are as concrete as possible for that purpose, in order to provide the most useful input to the task of the Executive Secretary.
In addition I would like to remind all of us that with respect to review and testing we are not starting from zero, on the contrary. Between COPMOP5 and 6, in the year 2011, a scientific review as well as a round of testing of the draft Guidance had been undertaken. Appropriate formats and questionnaires had been developed for that purpose and I suggest that we build our work on the experience gained.
I think that a clear, concise and comprehensive questionnaire would be the best tool to fulfil the purposes of structuring the testing and ensuring harmonized feedback about the results. In concrete terms, I suggest that the Secretariat develops such a questionnaire, taking into account the format of the one used in 2011 but amended according to any experience gained during the late 2011 exercise, as well as taking into account the questions suggested by Piet van der Meer in his posting of today and other relevant suggestions. I suggest not differentiating between experienced and less experienced risk assessors when developing or applying the questionnaire. On another level, concerning the testing tools/methods, I suggest that we should not limit the potential and practical testers in their choice of testing settings: workshops, face-to-face meetings, competent authorities or risk assessment bodies at the national, regional and subregional level, testing by individuals, etc. are in my view all possible and appropriate settings in which to undertake meaningful testing of the Guidance. I have recently been involved in a couple of workshops in different parts of the world (Africa, South-East Asia etc.), where previous versions of the Guidance have been used and tested, and I have always found workshop settings particularly useful in allowing a multidisciplinary and broad application of the Guidance and a test of its usefulness. Financial implications are no doubt involved, but if certain countries or regions find the means to organise and undertake workshops I would fully support such approaches for the reasons stated above.
I suggest that the whole Guidance Document should be subject to testing, that is also how I read decision BS-VI/12. Thereby the important various interlinkages and references between the different parts of the Guidance could be tested effectively.
Concerning the availability of “actual cases of risk assessment”, I would like to echo those who have suggested that these should be real and existing cases. At the same time we should not restrict ourselves to a narrow set, but allow the use of those cases the testers find available and appropriate. I suggest that the Secretariat provides information via the BCH on which concrete cases are available worldwide and where to find the information. It is not of primary interest whether such a real case is a dossier submitted by an applicant, an already existing risk assessment, or any other real case available. This will be up to the tester to decide and depends, inter alia, on the national or regional situation. In the context of our task it is the result of the testing of the Guidance, concerning its usefulness and need for improvement, that counts, not the result of the risk assessment as such.
I hope that we will be able to assist the Executive Secretary in an effective way in his important task concerning the testing. I also hope that we can approach the testing exercise with a broad, open and neutral view without prejudicing the result of the testing exercise before it has even started.
I am looking forward to the remaining days of discussion and the next steps to follow. Best wishes to all of you
Helmut Gaugitsch Head of Unit Landuse & Biosafety, Environment Agency Austria
posted on 2013-01-23 15:49 UTC by Mr. Helmut Gaugitsch, Austria
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4074]
I would like to support Piet's intervention [#4059] as to me it makes the required process, the objectives, and the possible tool/tools quite clear.
I also agree with Andrew Roberts [#4062] that in the testing process "we will have to carefully ask what information is most useful, rather than just what information is interesting."
I would like to clarify what I meant by saying that if workshops are to be held then they should be 'representative'. I was using the word 'representative' too loosely. What I meant was that different perspectives should be sought, and that care should be taken that participants do not all come from similar backgrounds and perspectives.
Thus, I also agree with Dr Wei Wei [#4066] (supported by Angela Lozan [#4068]) that the input of 'neutral' scientists should be sought (probably more in terms of testing the additional guidance than the Roadmap). This is because (as Boet Glandorf [#4071] has pointed out) the Roadmap should be tested by risk assessors. I like Boet's suggestion that "other testers may be involved, as long as it made clear what their function and involvement is". To my mind, identifying and establishing the expertise and role of the testers will be a fundamental aspect and should be aligned with the objectives of the testing.
regards janet
posted on 2013-01-24 16:35 UTC by Janet Gough, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4081]
Dear colleagues,
following the very interesting discussion, I tend to agree more with Geoff Ridley's posting [#4077], but could accept Boet's suggestion that [for me, in limited cases] "other testers may be involved, as long as it made clear what their function and involvement is".
I also agree with Janet Gough in supporting Piet's intervention [#4059] as clarifying the required process, the objectives, and the possible tool/tools.
Hope this helps
Detlef
posted on 2013-01-25 11:52 UTC by Prof. Dr. Detlef Bartsch, Germany
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4043]
POSTED ON BEHALF OF MARIA MERCEDES ROCA ------------------------------------------------------------
I have read the many contributions made to the forum with great interest and continue to learn a lot from these discussions. I especially thank Hans for moderating this lively and complex discussion. In order to make concrete suggestions in response to Hans's questions, I join several colleagues in seeking clarification to define more clearly the following:
I. Certain terms and concepts (such as “tools” for testing the Guidance)
II. The target audiences (or target groups) that the testing process wishes to reach and help
III. The scope/usefulness/relevance of the Guidance – related to point II.
I. Tools. Hans summarizes that “tools” may include:
1. Regional meetings
2. Questionnaires
3. Working through “actual cases”: dossiers sent by developers to functioning biosafety authorities where decisions have been made by particular governments
Tom and Didier also suggested a fourth possible “tool” or approach: that the Guidance developed by the Secretariat should be “tested” against, or compared to, other guides and reports on risk assessment. This makes a lot of sense. You may agree that to test anything, it is useful to compare it to similar or related things and then make a decision. It also helps if you know a lot about what you are testing. E.g. to properly test which manual for car mechanics is the best, you must know a lot about cars and mechanics and compare different manuals. To decide whether one, two, three or all four “tools” are to be adopted, it would also be useful to clearly define the target groups for the testing exercise, and I sense there is discrepancy on this that would be useful to clarify.
II. Target groups. Is it university researchers and small companies, as well as experienced and inexperienced risk assessors, as Sol and David suggest we classify the target audience? Many (including me) were under the (wrong?) impression that the main target group who specifically asked the Secretariat for help were inexperienced “regulators” that need to be trained in risk assessment (or should we say in basic risk science, risk analysis and specifically risk assessment of LMOs). I was under the impression that the current guidance was mainly developed for this key target audience, for the following reasons:
• Countries that have their own well established biosafety systems (e.g. US, Australia, Argentina, Mexico, New Zealand, UK, etc.) and have practical experience in auditing and conducting risk assessment for LMOs don't need help, but can be the trainers. Small developing countries that are still developing their biosafety systems need this guidance more than others.
• Ironically, it is these same experienced countries (or professionals with experience in ERA) that are in the best position to test the Guidance developed by the Secretariat, as they have first-hand experience in using other approaches to risk assessment.
III. Scope. Another useful clarification would be to define clearly whether the Guidance is mainly developed to train inexperienced risk assessors from governments in auditing dossiers presented by applicants (seed companies, universities, research institutions), or actually to train risk assessors working for governments and developers of the technology in developing countries in conducting a risk assessment. Although the two processes are related, they are not the same; they target very different professionals with different backgrounds and thus need different training resources and methodologies. The first (auditing) needs less complexity than the second (conducting). In this case a “one size fits all” approach (one Guidance document) may not work very well.
I would gently and humbly ask experts from developed countries not to lose sight of the time, expense and effort it takes to organize regional meetings and conduct a risk assessment process in poor developing countries, where resources are very limited and professionals (often volunteering their time) have to split their time among multiple responsibilities beyond biosafety issues for LMOs. The context of the cost/benefit for the country must be carefully considered.
A final point regarding the language in which the Guidance document will be tested for its usefulness and relevance. All documents are still in English (I understand), and translation into other languages, such as Spanish, needs to be done by professional translators and then audited by professional risk analysts who know the right terminology, so that accuracy, clarity and consistency are maintained in the translated document. This is not an easy task, but unless “testers” can do the testing in their own language, the testing will be faulty.
posted on 2013-01-21 14:24 UTC by Dina Abdelhakim, SCBD
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4044]
Dear all,
Thanks to Hans for framing the questions and moderating the discussions and to all who have contributed so far.
Based on the Decisions, I'd anticipated that testing in ‘actual cases of risk assessment’ would involve risk assessors taking a current or recent application and using the Guidance to guide them through the risk assessment process. I anticipated that ‘tools’ would provide a structured framework for testing (i.e. a questionnaire), which would help the Secretariat to make sense of the outputs. Hans' most recent post focuses us towards defining what these questions would be, and I agree this describes the issues well. However, I'm currently unclear about how a workshop would be structured in such a way as to contribute to the testing process.
The point first raised by Piet van der Meer is very important – we need to clarify what we are testing for. Piet provides some useful suggestions for objectives. These objectives could help us frame the questions.
For experienced risk assessors working with an existing regulatory framework, a key question will be whether the Guidance helps with harmonisation. My thoughts on how to practically test the Guidance in this context follow. In this context, applications ought to contain the information specified by national/regional legislation. When assessing these applications against the Guidance there may be areas where the application does not contain information recommended by the Guidance (because it is not required) or conversely areas where the application contains additional information not specified by the Guidance (because it is required).
Firstly, it would be useful to ask whether this is the case. Secondly, based on experience of risk assessment, are there important omissions, or areas where the information asked for is not helpful/necessary in the risk assessment? Thirdly, does the Guidance provide sufficient help when assessing the appropriateness or quality of the information? Answering these questions might help direct revisions to the Guidance. The aim should not be to eliminate all differences between the Guidance and existing regulatory frameworks, but to address those which are important for effective risk assessment. Questions about clarity, consistency etc. would of course also need to be included in the questionnaire.
A different set of questions will be relevant for those who are new to risk assessment/without an established regulatory framework, as this audience would make use of the guidance in a different context. I agree with the most recent post from Dina Abdelhakim that this is a very important audience. It is important that efforts are made to engage this audience in framing the questions and testing the guidance. I don’t think this automatically necessitates workshops, which as noted have associated costs.
It is important that the Guidance is tested against a range of different ‘actual cases of risk assessment’. I’d agree that maintaining flexibility is important and also that sufficient information should be collected on the cases being tested. The BCH could be used to collect early information on which type of application Parties plan to test to avoid all Parties focussing on the same GMO. If the specific chapters are to be effectively tested it is essential that Parties with experience of LM mosquitoes, trees and abiotic stress are involved in testing the Guidance. Again, it is important that efforts are made to engage with risk assessors with this experience if the Guidance is to be tested effectively.
posted on 2013-01-21 17:02 UTC by Dr Katherine Bainbridge, Department for the Environment, Food and Rural Affairs
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4045]
Dear participants
Having been away from the forum for 3 days it is inspiring to come back and find so much discussion.
I would like to explore the point that Isao Tojo [#4026] made: “Also, annex of BS-IV/11 describes one of terms of reference of AHTEG, such that “Develop a "roadmap", such as a flowchart, on the necessary steps to conduct a risk assessment in accordance with Annex III to the Protocol and, for each of these steps, provide examples of relevant guidance documents”. And “According to the above mentioned report and decision, a desired guidance seems not to be a “generic guidance”, but a simple “roadmap”, such as a flowchart with examples.”
This to me is the key: we asked for a road map, such as a flow chart, which is what we got, provided in the Guidance as Figure 1 (unfortunately prefixed with 16 pages of other notes). Our primary role should be to assess each step in the road map (Figure 1) for its usefulness and relevance in conducting a risk assessment. If a step in the road map is not self-explanatory then the user should look at the additional notes for guidance and assess these for usefulness and utility as well. The same process can be applied to the specific examples in Part 2 of the guidance document.
The “tools” need to be inquisitorial to determine whether or not the road map provides the step-by-step process for conducting a risk assessment. The tool need only be a questionnaire that quizzes the user on the usefulness and relevance of each step in conducting a risk assessment. The responses of the user recorded in the questionnaire can then be analysed. The outcome of the analysis should be to determine whether or not the road map is useful and relevant. To be useful it need only be a step-by-step process with signposts to other relevant guidance material. There is no requirement for the road map to be self-contained guidance on conducting a risk assessment.
The question of who should be testing the road map has been asked. I think probably everyone. If the tool is fully inquisitorial then those users with little experience in risk assessment will be able to show us quite quickly which steps are neither useful nor relevant.
Having read through the postings I do not get a strong sense that workshops are needed, or useful, especially in these financially difficult times.
I am also of the opinion that if the analysis shows that the roadmap is neither useful nor relevant to conducting a risk assessment that we should reject it and not continue to fiddle with it in an attempt to make it useful and relevant.
Regards Geoff Ridley, Environmental Protection Authority, New Zealand
posted on 2013-01-22 01:49 UTC by Dr. Geoff Ridley, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4046]
Thanks to Hans for moderating this session and providing a structure for discussions.
In response to his halfway points I feel that the appropriate tools and questions will be shaped in part by the answer to What is the objective of the testing? (A question highlighted by Piet van der Meer and Katherine Bainbridge). The answer seems to lie in 5(b) of BS-VI/12 “feedback provided as a result of testing on the practicality, usefulness and utility of the Guidance, (i) with respect to consistency with the Cartagena Protocol on Biosafety; and (ii) taking into account past and present experiences with living modified organisms”.
Points (i) and (ii) relate to Piet's testing for appropriateness. The tool for Point (i) is probably expert opinion (e.g. using experts who had crucial roles in drafting the Protocol or who wrote IUCN Environmental Policy and Law Paper No. 46, ‘An Explanatory Guide to the Cartagena Protocol on Biosafety’). The appropriate questions would then ask:
a) Which components of the Guidance are consistent with the Protocol?
b) Which parts misrepresent the Protocol?
c) Which parts do not show sufficient linkage to the Protocol?
The tool for Point (ii) is probably a questionnaire to national competent authorities from countries with the most experience with commercial/general releases of LMOs. The questions could then include:
a) Which components of the Guidance have corresponding sections in national legislation, guidance or processes?
b) Which components of the Guidance do not have corresponding sections in national legislation, guidance or processes?
c) Which important, relevant components of national legislation, guidance or processes are not covered by the Guidance?
With respect to ‘testing on the practicality, usefulness and utility of the Guidance’ (Piet’s test for usefulness), the outcome will depend on highly divergent interpretations of these terms. For example, ‘practical’ may mean:
• ‘available in the national language’ (as pointed out by Maria Mercedes Roca), though subject to the vagaries of the quality of the translation
• ‘scientifically feasible’
• ‘capable of being done with few resources and skilled people’
• ‘capable of being done by highly skilled, heavily resourced people, whose results can be adopted by those with few resources’
• ‘capable of being used for auditing purposes or for doing a risk assessment from scratch’
• ‘capable of being applied to GM equine influenza virus’
and so on. Each interpretation of what is meant by ‘practical’ can be equally valid, but will strongly influence the answer to whether or not the Guidance is seen as ‘practical’. Therefore it will be important to explicitly elicit these interpretations. (As an aside, for those of us who are linguistically challenged (as in my case), could someone tell me how ‘usefulness’ and ‘utility’ were distinguished at the MOP? Are there examples of things that are useful but have no utility, and vice versa, things that have utility but are not useful?)
In response to the call by Hans for testing ‘actual cases’, the tool with the greatest validity for testing the practicality, usefulness and utility is a post hoc survey of all national competent authorities and developers who submit dossiers, asking whether or not they used the Guidance in risk assessments of LMOs evaluated in 2013/2014, which parts were used, and what was good or bad when applying it.
posted on 2013-01-22 05:24 UTC by Paul Keese, Australia
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4048]
Dear All,
Thanks for all the valuable comments submitted so far, and special thanks to Hans for moderating this round. After having read the different comments and proposals, I can envisage two different settings for workshops, or a combined concept of “top down” and “bottom up”.
“Bottom up” would mean taking the Guidance and using the points to consider of its different parts as your guiding tool. You can define your case (e.g. Bt maize, a stacked event with two Bt genes, or an HR plant species) and ask in a workshop setting which aspects are relevant in your country, which points to consider should be applied (or not), which information and data are needed (or not), etc., and come to an informed conclusion on comprehensibility, usefulness and practicability, also taking into account the links to background documents. You do that throughout the different steps and points to consider as laid down in the Guidance.
“Top down” would mean taking a real application with all its studies and information and aligning these to the different sections of the Guidance, especially the Roadmap and Part 3 on monitoring (including, where relevant, Part 2 on specific traits and organisms). This would help to identify which parts and points to consider are addressed in the application and how, whether that is deemed sufficient, what seems to be missing, what seems to be not relevant in the given case, etc. With good matrix-like documentation of such workshops you would also be able to compare evaluations. I would agree that a questionnaire is only second best because it is normally an isolated endeavour, but with a pre-developed evaluation matrix you could also do this as a desk study.
There may be some overarching questions/aspects too: whether the Guidance helped to structure the way you deal/dealt with an application, or, vice versa, whether your structure and organization of the evaluation of an application would fit into the framework of the Guidance, which parts are implemented and which possibly not, etc.
If we want to constructively further develop the Guidance, we should go a step further than in the last round. My understanding is that the Guidance should help and support risk assessment and risk assessors in the framework of the Cartagena Protocol and its Annex III. Experienced parties, such as the European countries and others, may compare their own frameworks with the Guidance and share their evaluation; less experienced parties may report whether and how the Guidance supported the development of their own national framework and the structure and organization of their risk assessments. With best regards, Beatrix Tappeser, Head of Division GMO Regulation and Biosafety, Federal Agency for Nature Conservation, Germany
posted on 2013-01-22 08:28 UTC by Beatrix Tappeser, Federal Agency for Nature Conservation
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4049]
Dear Hans, thanks for formulating the questions that can help us identify the appropriate tools for testing the Guidance. Thanks to colleagues also for their comments and interesting ideas. I have the following few comments:
1) The Roadmap is intended to make Annex III more comprehensible when conducting risk assessment. Therefore the target audience should be national assessors, developers and regulators with experience. However, there should also be regional workshops for capacity building on the Roadmap.
2) On the issue of aligning the Guidance and the Manual, the two documents should remain independent; however, the Manual should be restructured in such a way that its content serves as an explanatory guide to the Guidance. The Guidance should also be flexible enough to allow national authorities to adapt it to national needs.
3) On the issue of integrating either all of the Guidance or just the Roadmap section, we should focus on aligning the content of the Roadmap and the Manual.
Rufus Ebegba, National Biosafety Office, Federal Ministry of Environment, Abuja-Nigeria
posted on 2013-01-22 11:44 UTC by Dr. Rufus Eseoghene Ebegba, Nigeria
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4050]
Greetings to All. Thank you, Hans, for carrying out the heavy task of moderating this session and for guiding the open-ended online forum in giving feedback to assist the Executive Secretary with the development of appropriate tools to test the "Guidance on Risk Assessment of Living Modified Organisms". Thank you also to all for the very interesting inputs thus far.
On deciding on a tool to test the "Guidance on Risk Assessment of Living Modified Organisms" in actual cases of risk assessment, I agree with what has been previously stated by other participants in this online forum: the objectives of the testing should be clearly identified, and in my opinion, similar to many others', the testing should assess the appropriateness/relevance and usefulness of each part of the Guidance as it is applied in a risk assessment. Paragraph 1(b) clearly states that "The Guidance will be tested nationally and regionally for further improvement in actual cases of risk assessment and in the context of the Cartagena Protocol on Biosafety". Here, I would like to suggest that the Secretariat request National Competent Authorities to have their risk assessors test the Guidance against actual cases of risk assessments that they have conducted so far using other guidance documents or methodologies previously established under their national biosafety legislation. This was suggested by both Tom and Didier and supported by Maria Mercedes. It will help Parties give feedback to the Secretariat on the usefulness and appropriateness of the Guidance document in conducting risk assessments. It will also help Parties, as Boet suggested, evaluate the different components of the Roadmap (Part I of the Guidance) and give feedback on whether the associated background documents are useful as presented, and whether the points to consider are sufficient, not relevant, or perhaps even missing, as suggested by Beatrix.
The questionnaire approach would also be useful to get feedback from Parties, both experienced and not so experienced, as well as those that have not done any risk assessments but are familiar with its general principles. I would like to suggest that, besides obtaining feedback on the usefulness or relevance of the Guidance document, feedback on the clarity of the language used must also be obtained. This is especially important for countries that would like to translate the Guidance into their national language; the translation will not be of any help if the original language itself is confusing or not explicit in what it intends to state. The Guidance might also have helped Parties amend their own risk assessment guidelines to include certain important points that they might have overlooked. This feedback is also important to show that the Guidance, even if not useful to a Party in its entirety, might have helped that Party improve certain aspects of its existing risk assessment guidance document.
A pre-developed evaluation matrix as suggested by Beatrix would also be a good option to go for.
As mentioned by others as well, the Secretariat should get Parties to test the Guidance against different types of releases into the environment, i.e. one confined environmental release and one unconfined environmental release, preferably for the same LMO.
In choosing the actual cases of risk assessment, I would like to suggest that the Secretariat provide actual cases of unconfined environmental release that are appropriate for a particular region; e.g. for the Asian or South East Asian region, LMOs like papaya, corn, cotton or rice would be useful. The Secretariat should then suggest that Parties also pick an actual case that they have already assessed for a confined or limited environmental release.
Kind Regards Vila Ministry of Science, Technology and Innovation, Malaysia.
posted on 2013-01-22 12:37 UTC by Vilasini Pillai, Ministry of Science, Technology and Innovation - Chair real-time conference Asia
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4051]
Dear All. I join many colleagues in thanking Hans for clarifying the task for this session and guiding the open-ended online forum in giving feedback to assist the Executive Secretary with the development of appropriate tools to test the "Guidance on Risk Assessment of Living Modified Organisms".
I want to support Vila's suggestion that the tools be tested by competent national authorities and various assessors from different Parties. Best regards, Gado
posted on 2013-01-22 13:25 UTC by Mr. Mahaman Gado Zaki, Niger
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4052]
POSTED ON BEHALF OF RYOSUKE FUDOU -----------------------------------------------------
Dear all, I would like to thank Hans for summarizing the comments posted so far on this discussion forum. In my view, it is not necessary to develop particular tools for testing the risk assessment guidance. If Parties have experience with the risk assessment of individual LMOs, I would first propose that they review their risk assessment procedures against the current Guidance document. We can hopefully identify various points to consider, such as criteria for endpoints, thresholds, comparators and so on. I would also note that it is important to survey data on the manpower and costs of risk assessments and risk management (if available) that were (or will be) required in each national regulatory framework. The time required for the regulatory process is also important. It is known that regulatory delays in approval, or the prolongation of field tests, produce indirect costs (opportunity costs of benefits forgone), which may have a large impact on the later decision-making process. Best regards, R. Fudou, Japan Bioindustry Association
posted on 2013-01-22 14:44 UTC by Dina Abdelhakim, SCBD
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4056]
Dear all
I am somewhat at a loss to understand what outcome is expected to be achieved by having the Roadmap and Guidance. I have read some of the postings, and it seems to me that there is a belief that these documents will create capacity where there is none, or that a brief training course in using the risk assessment manual, based on the Roadmap, will create capacity. At best the Roadmap will provide a rudimentary framework for applying Annex III. In reality, people conducting risk assessments need to have been involved in conducting risk assessments alongside those who have done them before. It is an apprenticeship.
As for the additional guidance material: it cannot tell you how to do a risk assessment. All it can and should do is record what has been useful in conducting past risk assessments. In the case where an organism is so novel that it has not been assessed before, the guidance might be a place to make suggestions as to what might need to be considered in such cases, although we will not know until we actually do one.
As to what the tools are: it has been suggested that they are regional meetings, questionnaires, working through actual cases, and comparison with other risk assessment guidance. Sorry, but meetings are a venue where we can meet to use tools, and we still need to develop tools to use; working through actual cases still requires a tool to do so and is not a tool in itself; and comparing risk assessment guidance will only show you how similar or dissimilar documents are, not whether one is better than the other. The only tool suggested so far is the questionnaire. So unless someone can come up with another tool, it seems to me that we should be talking about a questionnaire and the nature of the questions it should contain.
Regards
Geoff Ridley, Environmental Protection Authority, New Zealand
posted on 2013-01-23 04:51 UTC by Dr. Geoff Ridley, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4057]
Dear participants,
This is the first time I have participated in the online forum, and I did not have the chance to attend the face-to-face workshop, so I would like to thank not only the moderator, but all the colleagues who have tried to clarify how we got here, what the Guidance was written for, and all the work that has already been done. As Ms Li Ching Lim (#4041) pointed out, "we are not starting from scratch"; it is important to remember that we have to move ahead and avoid repeating the same testing again. I am also glad to hear from Ms Li Ching Lim that they have already been using the Guidance. That seems to me the best way to test it, as other colleagues have expressed: using the Guidance on actual cases.
Coming back to the questions that the moderator posed at the beginning of this forum, in my opinion the best tool to test the Guidance is to use it in different actual cases.
To do that, I think we have to use the Guidance in the sense that Isao Tojo (#4026) recalled (from the annex of BS-IV/11): not as a self-sufficient document, but as "a "roadmap", such a flowchart, on the necessary steps to conduct a risk assessment in accordance with Annex III to the Protocol and, for each of these steps, provide examples of relevant guidance documents".
I also think that the tool should not be adapted to accommodate different levels of experience, as one of the points of the test, in my view, is to detect whether the Guidance fits different levels of experience. It would be interesting to see whether an experienced team of risk assessors would be able to assess a case with the "roadmap" plus the "guidance documents" compiled in the Guidance.
We could apply the Guidance to different cases, so that we can identify whether all of them are covered by the Guidance. The different cases should be evaluated by different teams (meaning different levels of experience, different regions, different endpoints). The testers should answer a questionnaire covering all the steps of the process, aimed at detecting whether the flowchart and the "guidance documents" together are enough to allow the team to conduct the assessment. In my opinion, only with this homework done would it make sense to meet in a workshop, which allows face-to-face discussion among risk assessors.
Kind regards,
Victoria Colombo
Comisión Nacional de Bioseguridad, España.
posted on 2013-01-23 07:33 UTC by Dra Victoria Colombo Rodríguez, Spain
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4062]
Dear Hans, and other participants,
I have been traveling and I realize I am arriving somewhat late to the discussion. I will take the time to thank Hans for moderating the forum, as well as everyone who has contributed so far. There have been many interesting suggestions and I will not try to repeat all of them, but I will try to add my thoughts in a way that might be constructive.
First, I have to echo the comments of Geoff Ridley and others regarding the process, what we hope to accomplish and how the results of the testing will be used or incorporated into further work. I think the previous four years of work by the AHTEG was hindered by a lack of transparency in operation and I don't think that we gain anything by repeating this. It would be desirable to have an understanding of how the data generated through testing will be dealt with. With recognition that at some point we simply have to meet the terms of the decisions taken at the COP/MOP, I think that following the conclusion of these discussions a plan for testing of the guidance should be drafted and circulated.
Before getting to the direct questions posed in the forum, I also need to point out that many of the suggestions being offered (regional workshops, preparing multiple "actual" cases, preparing surveys and performing statistical analysis on the results), may be appropriate but they are also very ambitious. I think, much like when conducting a risk assessment, we will have to carefully ask what information is most useful, rather than just what information is interesting.
I will try to answer Hans's original questions, again without repeating all of what has been said before:
1. I think a survey tool is an appropriate methodology for testing the guidance. However, we will need to design and interpret the results of a survey carefully. If multiple audiences are attending different workshops, considering different case studies and receiving training from different facilitators then it will be very difficult to compare or contrast the results. In essence, it will be hard to determine whether we are testing the guidance, the case study, the facilitator or the workshop format.
2. I believe that right now, testing should focus on the Roadmap. I don't believe that the other pieces of the guidance document are ready for any sort of testing. While I acknowledge that they have been in development for a long time, and no small amount of effort has been put into them, I think they suffer from some fundamental flaws and the most appropriate action would be to test and refine the roadmap, then use this as the basis for the improvement of any additional guidance.
3. see above
4. I agree with the comments of others that a survey does not need to be adapted for testers with different levels of experience. However, it is vitally important that we identify the level of experience so that the results can be interpreted appropriately. If the survey is going to be accompanied by workshops and training, then these will need to be adjusted for the level of experience of participants. Experienced assessors should consider complex cases, while novice assessors should consider simple cases. This is one area where, as a community, we consistently fail. I am not aware of any area of education where we "teach" by dropping novices into the most complicated scenarios we can find and then ask them later if they understand the basic concepts. Again, I think there is a high potential to waste time and energy trying to "test" the guidance using workshops and training activities because of all the uncontrolled variables associated with these. I would recommend instead asking individuals and institutions to consider the guidance in the context of their experience with risk assessment, and apply it to cases as they choose.
5. I think a survey, carefully constructed, should be presented to testers. This should include information collection on the level of experience of the testers, and the context in which they consider the guidance.
Hans also posed some additional questions: 1. I strongly agree with Piet van der Meer that questions should focus on the compatibility between the guidance and Annex III - which remains the only binding description of a risk assessment applicable to Parties. Questions should also focus on whether the guidance meets the aims laid out in the decisions of the COP/MOP that led to the formation of the AHTEG.
2. The challenge in providing actual cases of risk assessment from the Secretariat is that the design and preparation of these cases should be vetted within the AHTEG and these forums to ensure that they are fit for purpose. I think this is too ambitious for the time constraints we are working under. In light of this, I think the testers can make use of cases available to them, and simply provide some information as to what sort of case they used to test the guidance.
Thanks to all who have contributed their thoughts and ideas to the discussion. I do not envy Hans having to try to sort through and summarize the many disparate views presented here.
Best Regards, Andrew Roberts
posted on 2013-01-23 19:09 UTC by Mr. Andrew Roberts, Agriculture & Food Systems Institute
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4067]
Dear Hans, dear all, many thanks. I agree with Piet's suggestions regarding the practicality and utility of the Guidance. The testing of the Guidance should be based on applying it to existing cases and on the experience gained. A clear and simple questionnaire would be one of the best tools for structuring the testing.
posted on 2013-01-24 10:19 UTC by Mr. Mahaman Gado Zaki, Niger
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4063]
POSTED ON BEHALF OF PATRICIA GADALETA --------------------------------------------------------
Greetings to all the participants and thank you for giving us the opportunity to contribute to this interesting debate on such a complex subject, and also thanks Dr Bergmans for your useful summary and for chairing the forum.
We have read the contributions made to the forum with great interest and we agree with most of them. In our opinion the testing should be performed at the national, regional or institutional level, to combine different experiences and organizations, and a survey questionnaire would be appropriate. Workshops may also be organized to help organizations that need them.
It is important that the Guidance be tested against a range of different actual cases of risk assessment, covering the actual situation in terms of species, traits and receiving environments. If possible, for better homogeneity and interpretation of the outcomes, the actual cases should be provided by the Secretariat and should be publicly accessible.
Parties, non-Parties and other institutions with experience in LM mosquitoes, trees and abiotic stress should be involved, so it could be useful to have a list of "tester organizations" and to know the type of expertise they have in order to test these specific cases, alongside organizations or government agencies without expertise in those cases that could also test the Guidance with these actual cases.
As we mentioned in the online forum in December, we are using the Guidance to introduce new professionals to risk assessment work and also to test whether the Guidance is useful for their task, namely the GMO applications we are working on. Our preliminary conclusion is that the Guidance is a detailed and complex document that requires special dedication to understand and to process the information available or to be gathered. For professionals who are new to risk assessment and not well familiarized with some issues, it could be difficult to discriminate which information is needed and to access it, particularly with regard to some subparts of Part II and Part III; these require orientation. So the coaching given by experienced evaluators is very important.
With regard to the last two questions posted by Dr. Bergmans [#4032] and our preliminary observations in the Biotechnology Directorate, we found the questions suggested by Dr Angela Lozan [#4040] very suitable for inclusion in a survey questionnaire, and we suggest a new forum for brainstorming about specific questions for each part of the Guidance and for selecting the actual cases, or for agreeing on the characteristics and complexity that these cases should have.
Best regards, Patricia Gadaleta Biotechnology Directorate Ministry of Agriculture, Livestock and Fisheries Argentina
posted on 2013-01-23 21:29 UTC by Dina Abdelhakim, SCBD
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4065]
Thank you Hans, for moderating and directing this discussion.
There have been a few main suggestions for methods of testing the Guidance, including a questionnaire and a workshop, and the questions to direct testing for each will be different. I agree with Belarus (#4047) and Malaysia (#4050) that a questionnaire would allow for breadth in the testing process, suitable for surveying a wide array of actors working within different frameworks, including regulators with varying levels of expertise and members of civil society who use Annex III. Just focusing on experienced regulators, or even normalizing the Guidance to existing RA frameworks as has been suggested, reduces its utility to sovereign countries building a RA framework that suits their individual needs.
But I also believe that a workshop would be valuable because it provides necessary depth, as a non-prescriptive and open-ended method of testing the Guidance as a whole, used in an actual case of risk assessment. As Hans said (#4032), "the testing needs to take into account one, or probably more than one, 'actual' cases of risk assessment." While some have expressed concern that a workshop would be cost prohibitive (#4043, 4045, 4054, 4055), there is already support from Moldova and Belarus (#4046) for conducting a regional workshop. Straying into issues of cost goes beyond the mandate of this forum, as all means of testing have costs. As Helmut pointed out, those issues should be left to the countries that decide they want to test using a workshop.
The questions should focus on the utility of the Guidance in a range of contexts, testing the process of using the documents rather than the actual result of the RA. We should also address, per the last forum's discussion, whether the documents are internally consistent or in need of further integration. The overall usefulness of the document as a whole should be tested, not a line-by-line discussion of each phrase taken out of context. The Guidance in full should benefit from testing, but testing the Manual was not requested by the MOP.
It seems logical that, with testing across diverse stakeholders, different parts of the Guidance will be reported as having greater utility. It need not be useful to everyone in all the same ways to be worth retaining as a whole.
posted on 2013-01-24 02:58 UTC by Dr Dorien Coray, University of Canterbury
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4066]
Dear All,
Following my previous intervention, I would propose a scientific approach to the testing. As Helmut pointed out, the last version of the Guidance underwent a review process by scientists (a so-called peer review). I would suggest having all the current documents, the draft questionnaire used to test the Guidance, and the test results and reports reviewed by scientists with academic backgrounds in ecology, environment and biosafety, especially those who have published high-impact papers in peer-reviewed academic journals on the ecological assessment of LMOs. One well-known biosafety scientist I can name at this moment is Prof. Allison Snow of Ohio State University, USA; I can provide other names if necessary. Plenty of names and addresses of appropriate candidates can easily be found from their publications, and I believe they would be happy to help, as biosafety research is among their main tasks. It would also be very helpful to have them present at any workshop. This can keep the process scientific rather than political. Politics will no doubt enter the final negotiation at the meeting of the Parties during MOP7, but we probably do not need too much politics at this stage; the decision is up to the Secretariat.
Further, I agree with David Herson's suggestion to have the Guidance tested by organizations and institutions. That is a good way to ensure a scientific manner of testing and a great addition to the test. However, we might need to be careful about the nature of the organizations and institutions. Many of them are neutral in the debate, focusing on the safety of the environment and the health of people rather than on profits made from biotechnology. In contrast, some organizations and institutions are funded, directly or indirectly, by agencies that profit from the commercialization of LMOs. We should be very careful with their testing results: although their opinions are important and we should have them, there is a risk that they do not have "a neutral view without prejudicing the result of the testing exercise". In addition, if there are workshops or face-to-face meetings, we can have neutral scientists working on biosafety in attendance. For any results of the test, the Secretariat should try to analyze them, again with the help of scientists working on biosafety, to avoid any diversion from our main focus caused by biased views.
Although this is a purely scientific view and somewhat "optimistic", our task is to aim for such a perfect output. I look forward to any comments and suggestions to improve this scientific approach to the testing.
Best wishes
Wei
(edited on 2013-01-24 03:51 UTC by Wei Wei)
posted on 2013-01-24 03:48 UTC by Mr. Wei Wei, China
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4068]
Dear All,
I fully agree with Wei Wei. Independent research organizations and institutions should be involved in the testing of the Guidance, and "neutral" researchers should be invited to workshops and meetings. That will help keep the testing on a scientific footing.
posted on 2013-01-24 11:08 UTC by Angela Lozan
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4064]
Dear participants of this Forum,
We are getting near to the end of this discussion, and I am very happy that so many of you have taken the time and effort to present their opinions.
In this posting I would like to point out a few things for clarity. These are issues which I came across as I was reading the postings of the last three days (up to 20:00 my time today). They are not meant as conclusions – they cannot be as we are still actively discussing the issues.
I may have caused some confusion when I stressed the concept, put forward by a number of participants, that there will be workshops to perform the testing. There should clearly be flexibility in the methods used: workshops, face-to-face meetings, activities of competent authorities or risk assessment bodies at the national, regional and subregional levels, and also testing by individuals have all been mentioned as possibilities for performing the testing, and this list is not complete.
In my personal experience the thought process needed for this kind of testing exercise is much more efficient if it is done together with some colleagues. This is what I meant by the term "workshop": a meeting of a few people who work together hands-on on a project, here the testing. The "workshops" that I envisaged are therefore not necessarily the type of large meeting with many experts flown in from different places. A workshop would also not consist of lectures or a general exchange of experience. It is clear that the more elaborate a workshop, the higher the costs, especially at the regional level.
I have confused the terms methods and tools. Workshops etc. are a method and not a tool. Most of us think that a questionnaire is an appropriate tool to guide the testing process. The testing will be on “the practicality, usefulness and utility of the Guidance, (i) with respect to consistency with the Cartagena Protocol on Biosafety; and (ii) taking into account past and present experiences with living modified organisms” (BS-VI/12), and several suggestions have been made by several of you for (the types of) questions for these testing goals.
Even then, a term like ‘practicality’ will have different meanings to different testers, as there are different aspects of the risk assessment process inherent to the term. It is probably wise to let the testers make their own choices of how they interpret the term. The questionnaire should ask them to state their interpretation of the terms explicitly. Also, whether the testing approach should be bottom up or top down is probably up to the testers, or to the Parties. Testers should be aware of both approaches.
One additional goal of the testing has been suggested: the Guidance could also be tested in comparison to other guidance, and in comparison to the tester’s own legislation. In this context the testers should also remember that a main objective of the Guidance is to provide a structured framework facilitating access to other documents useful for the risk assessment of LMOs (accessible through the Biosafety Information Resource Centre and the Scientific Bibliographic Database on Biosafety).
There should not be any confusion about the aim of the testing: testers are not asked to show that they can do a risk assessment. The testers' reports will be on where the Guidance was or was not practical or useful, where there was or was not utility, and how improvements can be made if they are necessary. Or: "whether the Guidance was able to assist in the risk assessment, how it could be more effective in places, whether it raised issues that the testers may have missed otherwise, whether it helped point to useful resources (e.g. through the links to the background documents), whether it helped organize the risk assessment process appropriately and what challenges remain." (Lim Li Ching) The questionnaire should address both 'experienced' and 'inexperienced' testers. The test results of 'inexperienced' testers are at least as important for this purpose as those of more experienced testers. BS-VI/12 mandates the Secretariat to provide a report on possible improvements to the Guidance. I therefore propose that it is of prime importance that the questionnaire also ask respondents to indicate how improvements could be made in the Guidance where they point out deficiencies.
An important issue is the ‘actual cases of risk assessment’. There is a large diversity of suggestions on what cases would be appropriate or necessary to be taken on board in the testing. I propose that the questionnaire could have a preamble that gives an overview of the considerations that have been put forward. As a short summary: the cases should (or could) comprise confined as well as unconfined releases, releases of different kinds of LMOs, and in order to be appealing there should be different, specific cases for different regions. Testers can use their own cases, of course, but the Secretariat should show the way where cases can be found, and could probably prepare some cases. Cases may be ‘graded’ from ‘easier’ to ‘more complex’, and suggestions have been made to that effect.
Finally, I want to thank you all very much again for all your efforts. It was an honour and a big pleasure to moderate this Forum.
Hans Bergmans
posted on 2013-01-23 22:26 UTC by Mr. Hans Bergmans, PRRI
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4069]
Dear participants,
First, I would like to thank Hans Bergmans and all the participants for the interesting discussion in the forum over the last two weeks.
I will present my opinion regarding the questions pointed out by Hans and based on the comments and ideas already posted in the forum.
I see the Guidance as having “two distinct kinds of usefulness”: to align RA concepts and procedures with Annex III of the Protocol for countries that already have a GMO biosafety framework, and to support the development of their own regulations/procedures for countries that do not yet have a legal framework. But maybe I am wrong, and I hope the questionnaire/survey can better show the real usefulness and relevance of each step described in the Guidance. So in my conception a tool is basically a questionnaire that can be used in different situations: at national, regional or subregional levels, through meetings, workshops, individually, etc. “The testers” should be “Parties, other governments and relevant organizations, through their risk assessors and other experts who are actively involved in risk assessment”, according to Decision BS-VI/12, paragraph 3.
So the idea of a questionnaire does not exclude the idea of a face-to-face meeting. But in my opinion we should have this questionnaire to standardize the outcome of any effort to test the Guidance and to capture the aspects we agree are important from this testing. The important part will be to ask the “right questions”, in such a way that these questions allow us to reach the desired goal: “further improvement” of the Guidance. Answering the two questions proposed by Hans:
- What (type of) questions are needed to focus the discussions of the testers
I think the proposed questionnaire can have two types of questions: one type that allows an evaluation of the more general chapters in the Guidance (such as overarching issues), for which a score with comments will work, and another type with more general, open questions. Some of these open questions/points were already enumerated by other participants, and I have tried to summarize some examples below:
1 - Questions to contextualize how the testing was performed and by whom: a) Which actual case was used? b) Are the testers Parties, non-Parties or organizations, with or without experience? c) Are the testers evaluators in an established regulatory system, or academics not bound by formal rules? d) Have the evaluators done risk assessments of applications before?
2 - Questions about the practicality, usefulness and utility of each step described in the Guidance: a) Is the flow chart clear, useful and relevant in conducting a risk assessment, and do you understand the logic described in the steps of the risk assessment in the Guidance? b) Are there parts of the Guidance that do not provide a clear way forward when applied to actual cases? c) Are there parts of the Guidance that are ambiguous? d) Can you locate the necessary information in the application? e) Are the criteria for the quality of scientific information to be used in the risk assessment made clear in the Guidance?
3 - Questions about the sources of information in the Guidance: a) Are the associated background documents useful as presented? b) Does the Guidance adequately point to the most important and relevant sources of information? (If not, identify the missing sources of information.) c) Are guidance documents other than those being tested used for risk assessment?
4 - Questions about the Guidance’s consistency with Annex III of the Protocol: a) Are there parts of the Guidance that go beyond the Protocol? b) Are there parts of the Guidance that are in conflict with the Protocol?
5 - Questions to evaluate whether the method described in the Guidance is compatible with the way risk assessment is normally performed under the tester’s legislative system: a) Which components of the Guidance have corresponding sections in national legislation? b) Which relevant components of national legislation are not covered by the Guidance?
6 - Questions about the language/terms used in the Guidance: a) Is the technical language understandable? (Proposals and suggestions welcome.) b) Is the glossary useful? c) Which terms in the glossary should have a better definition?
7 - Do you intend to use the present Guidance for risk assessment in your country?
8 - What are the benefits of using the present Guidance?
I also recall, as pointed out by Lim Li Ching, that the Guidance has already been tested, and the results of that round of testing can give some evidence about the right questions to achieve the objective.
- What guidance might the testers need to choose the actual cases of risk assessment? Should the Secretariat provide actual cases of risk assessment that could be used, or, in addition, should the Secretariat show where actual cases of risk assessment can be found?
The choice of an actual case to be analysed can be left open to each tester, based on their previous experience. The Secretariat can provide some actual cases in the context of the Cartagena Protocol, perhaps with different types of LMOs, intended uses and levels of complexity, but I believe that if someone wants to use another real dossier, that is also an option. With all the points raised in mind, I agree with Sol in hoping that the report of the test will show the parts of the Guidance that are not useful and the ways in which they can be improved. I also agree with Hans’ statement that “...is very important so that a report can be made of the test results, in a transparent manner, that can be used by the Secretariat to analyze, also in a transparent manner, the feedback and provide a report on possible improvements to the Guidance” (also in line with Decision BS-VI/12, Paragraph 5(b)).
Best regards, Luciana P. Ambrozevicius Ministry of Agriculture, Livestock and Food Supply / Brazil
posted on 2013-01-24 11:52 UTC by Ms. Luciana Ambrozevicius, Brazil
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4070]
Thank you Hans, for chairing this forum. It is a pleasure to be invited to participate in this very worthwhile and interesting discussion.
I have not yet participated in this forum, but I would like to comment on a few threads now.
I agree with Sarah Agapito's post (#4014), especially where she suggested that we should align the Guidance document with the training manual, using one dossier (to help us compile and interpret the results) through workshops, meetings and possibly an online forum.
I also agree with Jack Heinemann in his post #4017, in reply to Geoff Ridley's post #4016, when he says that the sentences in the Guidance document mentioned by Geoff DO highlight some important issues that are highly pertinent to any assessment of an LMO. The field of LMO biosafety is only a few years old. As it is a young and developing area, I expect that there will be an ongoing debate in the scientific literature for some time yet about how best to proceed with a risk assessment and about what constitutes a risk and what does not. Given this situation, it is important that any risk assessment of an LMO uses the latest available science as it develops, and I believe that the Guidance document fully captures this intention.
Furthermore, it is true that the Guidance document is not prescriptive as to the nature of the statistical tests that should be done, the sample sizes that should be accepted, etc. As a biostatistician (amongst other things), I believe the discussion as to which statistical tests, sample sizes, etc. should be acceptable in a risk assessment should be left to a discussion among experts in that field, perhaps at the next AHTEG as Jack Heinemann suggests, and I am happy to lend my many years of expertise to that discussion, if that is deemed appropriate. I disagree with Geoff Ridley (#4024) and Janet Gough (#4028) when they state that doing this goes beyond guidance. Guidance can include statistical guidance. Many of the dossiers developed for risk assessment of LMOs contain a considerable amount of statistics. Statistics is a specialist field that some people in some regulatory bodies may not have studied, or may otherwise feel uncomfortable with, and those people may value some guidance in the area.
I am further grateful to Angela Lozen of Moldova (post #4018) for offering her country to host a testing of the Guidance.
Thomas Hickson (#4021) has suggested that the Guidance could be tested using a questionnaire. This has been supported by some others. I am confused as to how this could be effective. I am not aware of any regulator that does a risk assessment on an LMO using a questionnaire; a questionnaire therefore does not match real-world risk-assessment experience. Furthermore, as someone who has written and analysed numerous questionnaires in my career, I would like to strongly support Leighton Turner's view (#4030) that questionnaires are much less “explorative, inclusive and informative” than a workshop, meeting or forum. Questionnaires can be quite prescriptive tools, used when you have already developed strong, defined hypotheses from earlier work and wish to test those hypotheses in a specific way to obtain quantitative data about, e.g., the proportion of participants that do “x” or think “y”. Questionnaires are very poor tools for obtaining the sort of information that is required here. That sort of information is best obtained through discussion, such as in a forum or workshop. And as Jack Heinemann has pointed out (#4025), the Guidance has already been tested by survey/questionnaire. In my view, a workshop, meeting or online forum would be the most inclusive and effective means of testing the Guidance; Moldova has offered its assistance in facilitating this, and I believe that their kind offer should be accepted. If a questionnaire is used, I believe that it should be a series of questions that prompt discussion in a workshop setting rather than prescriptive yes/no or tick-box questions, as these tend to dampen discussion in my experience.
I further agree with Leighton Turner (#4030) and Wei Wei (#4029) that we should have an open mind and test the Guidance to determine how successful it may be, rather than argue that it could not be successful. As an evidence-based researcher, I obtain evidence as to whether something is or is not successful, rather than deciding in advance, and without evidence, that it is not successful.
In my view, the only way to test how useful the Guidance is, is to use the Guidance on an actual LMO. In this respect, I agree with Victoria Colombo (#4057) and several others. However, I disagree with David Heron's suggestion (#4031) that the LMO chosen should be one that a country has already evaluated. Different countries evaluate LMOs differently, and different countries have different climates, native flora and fauna, etc. to consider in a risk assessment. Therefore, one country's assessment may not be appropriate for another country. The workshop/group testing the LMO should be free to make its own assessment of the LMO using the Guidance without feeling any pressure to follow any country's previous determination. The aim is to test the Guidance, not to compare the workshop's findings to any particular country's existing protocol, or to various countries' varying protocols.
Hans Bergmans (#4032) has argued for testing a less common LMO. However, common things being common, the Guidance is most likely going to be used on more common LMOs than rarer ones. It would therefore be wise, in my view, to test it against a common LMO to ensure it works well in that situation. Therefore, the Guidance should be tested on, for example, a GM crop rather than, say, a GM mosquito, because hundreds of GM crops have been developed, but very few GM mosquitoes!
When testing the Guidance against an LMO, I also believe that it is important to include people who have skills and experience in assessing risks to biosafety. In particular, I believe that it is important to include scientists with training and experience in areas such as ecology, environmental science, toxicology and human health assessments; and scientists with hands-on research experience in assessing the risks of LMOs to health and the environment.
posted on 2013-01-24 12:24 UTC by Dr Judy Carman, Institute of Health and Environmental Research
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4071]
Dear all, Thanks to Hans and all the participants for a very interesting discussion on this topic.
I agree with a number of participants that the most appropriate way of testing the usefulness of the Guidance seems to be the testing of actual cases, which should preferably be publicly available. Although a number of actual cases were suggested for this exercise, it seems to me that at least a few cases should be covered by all testers. This will allow us to directly compare the comments and suggestions for improving the Guidance made by testers from different regions and with different levels of experience for a few specific cases. This exercise could give us valuable information on how the Guidance is perceived by the different testers and how the Guidance could be improved to fit the needs of the actual (or future) users of the Guidance.
I foresee that the testers would not be scientists in general, but primarily risk assessors who are responsible for, or contribute to, the risk assessment of GMOs in their country. However, other testers may also be involved, as long as it is made clear what their function and involvement is.
As suggested by several contributors to this online forum, the reporting of the results of the testing of the Guidance could be done by answering a survey or questionnaire containing the relevant questions on problems encountered during the actual testing, missing aspects, and suggestions for improvement of the Guidance.
posted on 2013-01-24 13:31 UTC by Ms. Boet Glandorf, Netherlands
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4072]
Dear All, I agree with Boet that it will be helpful to have the same actual case done by different testers. In addition, I think it may be very helpful to have a number of cases covering different species and traits (if feasible). That will help to evaluate the utility and usefulness of the Guidance and the recommendation that it should cover all LMOs. It would also help to identify possible gaps in our background document compilation. I would like to underline once more that workshops may be the most helpful. I would not require that these be representative. The charm of workshops lies, for me, in the possible combination of people with different expertise, as has been addressed in a number of submissions. And I think a diversity of approaches could also be quite supportive of our task. Best regards, Beatrix
posted on 2013-01-24 15:25 UTC by Beatrix Tappeser, Federal Agency for Nature Conservation
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4073]
Dear participants of this Forum,
Thanks to all, and especially to Hans, who moderated this interesting discussion. I apologize for entering just at the last minute, but I was away and could not participate until yesterday, when I spent some hours reading and organizing my thoughts on the answers already posted. I will try to be short and not repeat any statement or proposal already posted.
After Hans' message of Jan 23rd, in which he organized the discussion a little more, I would like to say that, in my perception, the questionnaire will be a very useful tool, and working through local workshops (a few people around a table) will be one of the best ways to answer such a questionnaire (each country could organize for itself the best way to answer). The objectives/main goals of such a questionnaire are described in Hans' post of the 23rd (besides other previous posts where he mentioned them). These goals should be well highlighted at the very beginning of the questionnaire.
For me, the biggest challenge will be to elaborate the questionnaire in such a way that the treatment of the answers can be easily performed. Even so, I still think that a questionnaire is the best choice now, given the constraints of budget, size of the meetings and time.
Regarding the “cases of risk assessment”, I think they should comprise confined and unconfined releases, different kinds of LMOs and, if appropriate for each country, specific cases that fit their interests (at the local or regional level). Surely the Secretariat should offer cases as suggestions to the participants.
To finalize, I think that each of the Parties (experienced or not) will involve experienced decision-makers and other practitioners as much as they feel is appropriate to their process of decision-making using the Guidance and the Roadmap.
Thank you very much for all those interesting points presented
Deise Capalbo Embrapa Environment
(edited on 2013-01-24 16:22 UTC by Deise Maria Fontana Capalbo)
posted on 2013-01-24 16:22 UTC by Dr. Deise Maria Fontana Capalbo, Brazil
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4080]
Dear all,
I would like to add some suggestions for the questionnaire, which is the most appropriate tool in my opinion.
Regarding consistency with the protocol the questions could read:
1. Which part of the Guidance is in contradiction with which part of the Protocol? 2. Should it be deleted or reformulated, and how?
Regarding usefulness:
3. Which part of the Guidance is undefined or ambiguous? 4. Should it be deleted or reformulated, and what is your proposal?
Regarding utility:
5. Which part of the Guidance needs further updating on the basis of past and present experiences with LMOs?
6. Are there parts of the Guidance (especially in Part II, on specific LMOs) that are already considered in the Roadmap, do not give any relevant new answer/information and could be removed?
7. Are there terms that need better definitions to be more understandable, and what is your proposal?
8. Is there a need for more background documents so that some steps can be better conducted?
9. Is your overall opinion of the Guidance positive? Is it useful? Do you intend to use it for RA in your country?
Thanks and regards, Jelena
posted on 2013-01-25 10:49 UTC by Jelena Žafran Novak, Croatia
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4075]
Dear All
Many thanks for all the valuable comments submitted until now and special thanks to Hans for moderating this round of discussions.
I think the Roadmap and the Guidance documents need to be tested by all the targeted audience groups. Bearing in mind that the testing needs to be adapted to the various levels of testers’ experience, different specific testing tools for the different groups of target audiences may be needed. I myself think that testing through questionnaires and conducting regional workshops to test using actual cases of risk assessment are not mutually exclusive; in fact, both approaches might be needed. The most important thing is to get and compile the responses from all the different targeted audience groups on the usefulness of the guidance materials produced so far and on how they can be improved. Those comments and suggestions from the target groups would provide a clearer view of where the documents stand right now and what needs to be done.
posted on 2013-01-24 17:02 UTC by Mr. Ossama Abdelkawy, Syrian Arab Republic
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4076]
Dear All:
I would like to recall the MOP-VI decision in which the Parties decided to test the Guidance; this forum does not have a mandate to open new avenues that some may propose here, as the Parties did not give such an expanded mission to the forum(s). I strongly disagree with the forum independently generating brand-new issues in the name of brainstorming.
I agree with Andrew Roberts [#4062] on the past processes, but I will not go back over that issue here.
I would agree with Wei Wei [#4066] on the invitation of “neutral scientists”, but I am also concerned about starting again from square one, as the Guidance document was built over four lengthy years of reconciling different approaches and views on RA. At the present point of the discussion, I suggest listing the types of potential testers, who have different levels of comprehension of and preparation for conducting RA, and giving specific questions to these different categories of testers on different cases.
As for the point by Ossama Abdelkawy [#4075], elaboration may be needed on how to get feedback effectively from the users, as there is a limit on the number of participants at workshops, and replying online is not practical for many developing countries because of weak infrastructure. Also, workshops may draw on a specific sector, and the audience categories may be limited.
I am on the same lines as Piet Van der Meer [#4059], followed by Janet Gough [#4074], in directing this discussion specifically toward identifying specific testing questions, including what types of testers (risk assessors) would test the Guidance. Specific cases for such testers, and the specific testing questions for them, should be elaborated; arbitrary or generally broad applicability should be avoided, as such trials would dilute the focus of the testing mission and the expected outcome.
Kind regards,
Kazuo Watanabe
posted on 2013-01-24 23:24 UTC by Prof. Dr. Kazuo Watanabe, University of Tsukuba
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4077]
Dear participants, Rereading the posts, it strikes me that many participants appear to think that the Guidance is a how-to for doing a risk assessment. It is not.
What the Guidance is, and what it should always be, is a structured resource of advice and lessons learned from those Parties that have conducted risk assessment to meet their Cartagena obligations. At the end of the day, if the Guidance is not relevant and useful to the Parties in achieving this goal, then it is a waste of space.
The task that we are discussing is what tool we can develop to tell us whether or not the current Guidance can begin to fulfil our need. At this stage, the only tool that has been suggested is a questionnaire. All the other suggestions, i.e. workshops, are not tools.
Posting #4067 said that the questionnaire should be “clear and simple”, which I agree with in principle, but it is more important that the questions be sufficiently inquisitorial to allow robust analysis – the questionnaire’s role is to deconstruct the Guidance and examine it for relevance and usefulness. Such a tool might be used in a workshop, but I think most of those using the tool would not find the atmosphere of a workshop conducive to the task. At best, a workshop might be useful for shaping the questions to ensure they are inquisitorial, so that robust analysis can occur.
Posting #4049 clearly states that the target audience is “national assessors, developers and regulators with experience”, and I agree. To include anyone else is likely to tell us what we already know – that those not routinely involved in doing risk assessment do not understand the Guidance. What we were asking at MOP6 was whether those who routinely do risk assessment see relevance and usefulness in the Guidance. I do not agree with #4068 that “independent research organizations and institutions should be involved in the testing of guidance, as well as the "neutral" researchers invited to workshops and meetings” if they are not routinely involved in risk assessment.
Several postings have included comments on the neutrality of testers (#4066, #4068, #4059). I don’t think that it is so much the neutrality as the integrity of the testers that is important. The testers should fairly test the process without having a predetermined goal of developing a guidance that will guarantee the outcomes they want.
Much has been said about what cases should be used. I am of the opinion that they can include an already-decided case, cases where there is a lot of information, and also cases where there is little information. I see the advice in the Guidance helping to determine what information might be needed in obscure proposals and influencing the collection of the data that will be needed to do the risk assessment.
Finally, I want to re-emphasise that the Guidance is for the national assessors, developers and regulators of Parties that have conducted risk assessment to meet their Cartagena obligations. I am also aware that there are budget constraints on most Parties, so there needs to be maximum value for the resources that are to be expended in fulfilling our task. We also need to bear this in mind when large, naive work programmes, well beyond the mandate of the project, are proposed (e.g. #4070).
Regards Geoff Ridley, Environmental Protection Authority, New Zealand
posted on 2013-01-25 00:39 UTC by Dr. Geoff Ridley, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4078]
Dear Hans, Dear members of the forum
To the question of who the target audience should be, I note that there is a mix of views expressed in this forum. Importantly, the MOP did not say that it should be specifically and only “national assessors, developers and regulators”. In fact, it is not at all clear to me why developers would automatically have higher need of, competence in, or status in national biosafety frameworks than other stakeholders not in this select list. If they do, it would be on a country-by-country basis. But I am sure that they don’t in the legislation of all countries.
The suggestion that those not in this select grouping are “not routinely involved in doing risk assessment” is without foundation as far as I can tell. To further suggest that those not in this select group would therefore “not understand the guidance” is to predetermine the outcome of the testing, and to make assumptions that competence is a simple extrapolation of the number of times one does something, rather than, for example, the number of times one does something well.
I agree that the integrity of the testers is important, and I believe that is what #4066 and #4068 were saying. I hope that any implication that those who are not national assessors, developers or regulators have no integrity was inadvertent, or that I have misinterpreted what was said.
I also disagree with the implication that those in the selected three groups are by definition free of the possibility of having “a predetermined goal of developing a guidance that will guarantee the outcomes they want”. We have to acknowledge that anyone, including those in the regulatory community, can at any time have formed predetermined goals, or may appear to others to have formed predetermined goals. What the posts were suggesting is that there are obvious (monetary) conflicts of interest that should be taken into account, and these certainly should not be seen as making some stakeholders more qualified to assist in testing than others.
best wishes Jack
posted on 2013-01-25 03:27 UTC by Mr. Jack Heinemann, University of Canterbury
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4082]
Dear all
Even after Jack Heinemann's reply (#4078) to Geoff Ridley's post (#4077), which clarified some points for me, I still unfortunately remain quite confused about some points that Geoff Ridley raised. As others may also be confused, I wonder if I could please ask Geoff to provide a fuller description of some of those points.
First, could he please explain why he sees a questionnaire as being a tool but not a workshop?
The following statement of his also has me confused: “Several posting have included comments on the neutrality of testers (#4066, #4068, #4059). I don’t think that it is so much neutrality as integrity of the testers that is important”. However, it is my understanding that a tester is either (1) neutral or (2) not. A tester can't be a bit of both. It's like the old joke about being pregnant – you can't be a bit pregnant – you are either pregnant or you are not. If a tester is not neutral, then surely, by definition, he/she is biased and cannot have integrity. Surely only neutral testers can have integrity and test the process fairly and properly?
Finally, when discussing the cost of testing, Geoff Ridley refers to my post (#4070) when he says “... large naive work programmes, well beyond the mandate of the project, are proposed.” Again, I am confused as to how my posting referred to, or suggested in any way, any large work programmes.
I look forward to Geoff Ridley's reply
Best wishes to all
Judy
posted on 2013-01-25 12:29 UTC by Dr Judy Carman, Institute of Health and Environmental Research
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4079]
Dear all:
Thank you Hans for the excellent work guiding us here on the online forum.
As the forum is drawing to a close, I will merely echo here the most important points to me that have already been raised.
The guidance is to be tested and improved. This is clear from the decision taken in Hyderabad. We are not undertaking this exercise, after many years of developing a guidance, to test it in order to throw it away. Nor is the exercise to go line-by-line through the guidance, which would take the guidance out of context (#4065) and distort its evaluation.
As Hans has said, "The testing will be on “the practicality, usefulness and utility of the Guidance. ... The testers’ reports will be on where the Guidance was or was not practical, useful and where there was or was no utility, and how improvements can be made if they are necessary. ... BS-VI/12 says that the Secretariat is mandated to provide a report on possible improvements to the Guidance."
Evaluation is generally done both to identify where improvements need to be made, but also where the content or process is exceptionally helpful. The testing should provide both types of information.
So how do we test it? Certainly, as Hans has suggested and many others have agreed, face-to-face interactions between colleagues, walking through the guidance, would be ideal. I agree with the sentiment conveyed by Hans when he said that:
"In my personal experience the thought process needed for this kind of testing exercise is much more efficient if it is done together with some colleagues. This is what I meant with the term “workshop”: a meeting of a few people that work together hands-on on a project: the testing."
Beatrix (#4072), Boet (#4071), Helmut (#4061) and others have proposed the range of LMO cases that could be used in the testing of the guidance. Clearly there may be some LMOs that should be reviewed wherever the testing is taking place. Countries and regulators may also have some nationally specific LMOs they would want to also use for testing.
Who should do the testing? I agree with Ossama (#4075) and others that all targeted audience groups are the ones who should be involved in testing, not merely regulators.
How do we collate the results of the testing at national and regional levels? A questionnaire is a good way to gather information. Helmut (#4061) has suggested what seems to me the appropriate way forward -- that the secretariat develop a questionnaire building on the questionnaire used in the previous round of testing.
Thank you all for a stimulating discussion.
Regards,
Doreen Stabinsky
posted on 2013-01-25 03:39 UTC by Dr Doreen Stabinsky, College of the Atlantic
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4083]
POSTED ON BEHALF OF HIROSHI YOSHIKURA ----------------------------------------------------------
What has been agreed in CBD is that the risk assessors conduct RISK ASSESSMENT in compliance with the Cartagena Protocol and Annex III. If the current guidance under discussion has been useful for risk assessment using Annex III, it is good, but if it is adding further complication and ambiguity, use of the guidance may cause further complication in the conduct of risk assessment. What we have to check is this point, in my view.
I fully agree with Dr. Geoff Ridley’s comment, “The task that we are discussing is what tool we can develop to tell us whether or not the current guidance can began to (sic) fulfill our need. At this stage the only tool that has been suggested is a questionnaire”.
In my first posting, I listed some of the items that could be incorporated into the questionnaire. They are:
1. Whether the respondent has ever used or tried to use the current guidelines under discussion;
2. The respondent’s evaluation as regards usefulness in clarifying any problems encountered during the use of Annex III;
3. The current scheme of risk assessment and risk management in the respondent’s country;
4. Manpower required;
5. Cost and time (time and money are very important components, as pointed out by Dr. Fudo, because the risk assessment should be done in a timely manner with a reasonable budget; if it takes years and years, the assessment results will be outdated in the current rapidly changing environment and biodiversity).
In relation to the third item above, I add further observations as follows:
As I already indicated long ago in this forum, the Codex Alimentarius uses a scheme called “risk analysis”, consisting of risk assessment, risk management and risk communication. Risk assessment and risk management are defined as distinct processes, and risk communication as a continuing process during risk assessment and management. See the Codex Procedural Manual for more precise wording (page 112 in the 20th edition).
Most importantly, transparency is crucial throughout the processes.
Now back to risk assessment in CBD. When the risk assessors use Annex III, the qualification to paragraph 8, “as appropriate”, and that to paragraph 9, “depending on the case”, are crucial but difficult to decide on. They are crucial because they allow practical and flexible application, but difficult because different people will think differently. Agreement can be obtained democratically only through conversation (risk communication), which allows democratic conduct of risk assessment/management. In Japan, this part has actually been one of the most difficult processes, in my view.
CBD may need more clarity on the whole structure of risk assessment/risk management/risk communication.
Hiroshi Yoshikura Adviser, Food Safety Division, Ministry of Health, Labour and Welfare
posted on 2013-01-25 14:24 UTC by Dina Abdelhakim, SCBD
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4084]
POSTED ON BEHALF OF MARJA RUOHONEN-LEHTO --------------------------------------------------------------- Dear colleagues,
Thanks again for the good discussions and special thanks to Hans for moderating the discussion. I am very happy with all the constructive, practical suggestions made by many of you. Our task is to test the practicality, usefulness and utility of the guidance, with respect to consistency with the CPB, and also to take into account past and present experience. Clear and straightforward!
The best way forward is definitely interactive, face-to-face working, be it a small group of colleagues, a meeting with more people or even a larger workshop. Testers should be people from different disciplines, with different levels of expertise: scientists, regulators and so on. My best and most educational experience in testing and doing actual risk assessment comes from this type of setting. The goal must of course be clear, the testers should be guided to stay focused, and, as has been pointed out, both the testing and the gathering of results should be done with well-focused questions. We already have a set of questions from 2011 and these could be developed further. As has also been suggested, this work could be undertaken by the Secretariat. This type of working/testing and the results thereof cannot be achieved merely by questionnaires.
These testing “groups” can and hopefully will be national, sub-regional, regional etc. I can already think of several groups both in Finland and the EU that could perform this type of testing in a face-to-face setting, e.g. back-to-back with their ordinary working meetings.
I think it is very important to have a more interactive round of testing than last time. Let’s stay open-minded and constructive. Looking forward to getting started!
Marja Ruohonen-Lehto Finland Jan 25, 2013
posted on 2013-01-25 17:25 UTC by Dina Abdelhakim, SCBD
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4085]
Good morning from Mexico! I am very sorry I haven´t been able to keep up or participate very actively in this second round of discussions on the testing of the guidance on risk assessment, but I would like to at least contribute in some way before the forum closes today. In that respect I will share some thoughts on the questions raised.

I believe that "testing the guidance" should be a process flexible enough to suit any and all cases. This means that I do not think specific tools must be developed; each and every user of the guidance could test it in the way that best suits his/her/their particular circumstances. In our particular case, for example, we (as a government office that regularly does risk assessment to provide the competent authority with some elements for the decision-making process) will "use it in some real cases" (the ones we are now processing) and report on the usefulness (including clarity, omissions, etc.) of the guidance with respect to our daily work. Others (with other backgrounds/duties/etc.) could use the guidance in very different ways.

What is key is to have a very general set of questions (at the end) related to the outcome of the "usage process" that could give this group and the Secretariat insight into the relative easiness/usefulness/completeness/clarity (and a big etc.) of the whole guidance. It would be very useful if those who report back indicated their level of experience with risk assessment, what documents they used as the actual real cases (some could be provided by the Secretariat for those who do not have real cases at hand), and in what "context" the testing was done; this could all help give "focus" to the answers given.
I hope I was clear enough.
Un abrazo, Francisca
posted on 2013-01-25 17:36 UTC by Ms. Francisca Acevedo, Mexico
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4091]
As a final comment on some of the postings over the last day I would like to add support to the following submissions.
Thank you Francisca [#4085] - very clear and very sensible!
I also support Adriana [#4086] in terms of proposing a practical approach.
The questions proposed by Jelena [#4080] are useful as are those proposed by Hiroshi [#4083].
I also support Dave Heron's comments [#4089] and believe that the detailed set of specific questions that he has presented should be given careful consideration. I would like to thank him for clarifying the nature of a 'tool' which I now interpret as a set of material/information that can be used to support testing and also used as a yardstick for evaluating.
I also agree that it is important to have good translations available and recognise the difficulty in ensuring consistency in interpretation.
regards janet
posted on 2013-01-25 20:50 UTC by Janet Gough, Environmental Protection Authority
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4086]
Dear colleagues, Thank you again for the opportunity to participate in this discussion about the Road Map and the guidance. I don’t want to repeat all the other contributions, so my suggestions are very specific. In my opinion, the testing can best be done by the evaluators in each country. I don’t think it would be useful to do it in workshops, because that may make the test very artificial. I think it will be better and more useful to test it with the dossiers of the events that are submitted for each different environment of release. As the expertise in the different institutions and entities of the Parties differs, we will have a mosaic of users and a more realistic test of the Road Map and the guidance. I think a good questionnaire and an analysis of the answers will be essential to this test. The best to all of you.
posted on 2013-01-25 17:50 UTC by Dr. Adriana Otero-Arnaiz, Mexico
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4087]
Hello everyone,
It is a concern that there still appear to be different opinions among the online participants about what the ‘purpose’ of the guidance is. Clearly this question must be resolved before the guidance can be tested. I thought the point of the guidance is to assist the user to produce a risk assessment consistent with Annex III of the Protocol.
I would like to offer an opinion on several debates I see in this discussion about different approaches to the testing:
1) Should the outcome of the testing be captured in the form of a survey or questionnaire? I think that it would be most informative to include both a questionnaire and the assessment produced as part of the testing process, and that the answers to the survey and the assessment produced should be analyzed as an indication of the usefulness of the guidance. As some have mentioned, we have used a questionnaire in the past, and the results were not particularly informative, so perhaps this should not be the sole source of evaluation.
2) Should the test be administered to individuals or to a group as part of a workshop or meeting? Again, I think the answer should be that both approaches are allowed and encouraged, as different information can be gleaned as a result of these different approaches. Can an inexperienced risk assessor navigate the guidance without support? Of course it will be important to include information about how the test was administered in every case, and if it is in the form of a workshop, then who organized, and paid for, the workshop, and what format did the workshop take? Was background information presented as part of the workshop? Who were the presenters and the resource people? If individuals test the guidance, did they use any additional guidance, person, or organization as a resource? I wish to agree here with the comment by Janet Gough that ‘care should be taken that participants should not all come from similar backgrounds or perspectives’, and if individuals or organizations serve as a resource or organizer of a workshop during the testing, there should be an effort by the Secretariat to ensure that these participants are ‘representative’ of different perspectives, otherwise the test results are likely to be skewed.
3) Should the test-takers be experienced or inexperienced risk assessors, and should they include others? The test-takers should at least include both experienced and inexperienced risk assessors, and should not exclude others, and it should be clearly indicated with the results what the experience level of the test-takers is.
4) Should actual cases or hypothetical cases be used when testing the roadmap, and should a common set or random cases be used? It doesn’t seem possible to test the guidance without case studies, so some case studies will have to be used and this becomes a critical question. The information provided in actual cases is the information that has been used for actual risk assessments, so these seem like meaningful cases to use to test whether an assessment can be produced that is consistent with Annex III. If hypothetical cases are used, there will need to be some way to evaluate the quality of those case studies. What are the criteria for this? I also agree with a few in this discussion who have already suggested that there should be at least some common cases used in every test, although other, different cases could be used in addition. I think it would be very difficult to compare the results if common cases are not considered. I also agree strongly that the guidance should be tested in at least one case of a field trial and one of a general release.
Finally, I am happy to see specific questions have been suggested for the ‘questionnaire’ by several of you, and I hope some of the questions I have included in my comments above can be added to this list. I would also agree with a request from Andrew Roberts that ‘following the conclusion of these discussions a plan for testing of the guidance should be drafted and circulated’, and I would hope the list of questions for the survey could be shared for input from this group before it is finalized. This should greatly increase the chances for a successful testing of the guidance.
Thanks very much for the opportunity to comment.
Karen Hokanson Program for Biosafety Systems University of Minnesota
posted on 2013-01-25 18:11 UTC by Dr. Karen Hokanson, Agriculture and Food Systems Institute
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4089]
Dear participants:
A number of comments have added very practical suggestions to provide specific questions for a questionnaire to guide the testers of the Guidance, and I would like to offer some additional suggestions here. Some of the proposed questions are designed to acknowledge that there will be a range of experience among the testers, and the questionnaire can be constructed to capture this information as part of the analysis of the results of the testing process. In the early days of this forum we wrestled with what was meant by “tools” for the testing, and it seems to me that perhaps the tools needed for the testing might include the following:
TOOLS FOR TESTING:
1. The questionnaire (some more suggested questions are below).
2. A copy of Annex III of the Cartagena Protocol.
3. A copy of the decision of MOP6 related to testing the Guidance.
4. A copy of the revised training manual on risk assessment (to evaluate complementarity with the Guidance).
5. People who are involved in risk assessments of LMOs related to biosafety (environmental safety) and willing to participate in the testing.
I agree with the point made in an earlier posting: if the testing is to meet the mandate from the Parties, it seems wise to strive for a questionnaire that provides for “robust analysis” of the testers’ impressions of the Guidance. In order to do that, it will be useful to address the specific areas in the mandate in the MOP6 decision on risk assessment, plus gather information about the people who are doing the testing, especially in the area of previous experience in doing risk assessments.
With this in mind, I offer for consideration the following questions for the questionnaire, much in the spirit of the constructive, concrete questions that were suggested in Post #4080:
QUESTIONS TO GATHER INFORMATION ABOUT THE TESTER(S):
1. Prior to participating in this testing exercise, how many risk assessments of LMOs have you done as part of your government’s implementation of the CPB and/or your national regulatory regime? (none; 1-5; 5-10; 10-20; 20-50; more than 50)
2. In which context is your experience in risk assessment of LMOs? (academic; governmental; both academic and governmental; other)
3. Prior to participating in this testing exercise, for which types of LMOs have you conducted risk assessments? (LM herbaceous plants; LM trees; LM plants with stacked traits; LM plants modified for tolerance to abiotic stress; LM mosquitoes; LM fish; LM viruses; other)
4. Prior to participating in this testing exercise, in how many cases of environmental releases of LMOs have you previously engaged in monitoring of LMOs as described in Annex III of the CPB? (none; 1-5; 5-10; 10-20; 20-50; more than 50)
5. Please indicate in which capacity you are conducting this test of the Guidance (Party to the CPB; other non-Party Government; relevant organization).
QUESTIONS TO TEST THE GUIDANCE ON RISK ASSESSMENT: The questions are developed to take into account the decision of the Parties at MOP6:
1. To what extent does the Guidance document make clear that “The Guidance is not prescriptive and does not impose any obligations on Parties”? (very clear; somewhat clear; somewhat unclear; very unclear)
2. Are you conducting this test of the Guidance in actual cases of risk assessment and in the context of the Cartagena Protocol on Biosafety? (yes; no; uncertain)
3. Are you conducting the testing of the Guidance in its English language version or in a version translated into another language? (If the latter, please indicate the language and provide a copy of the translation that was used.)
4. Are you conducting the testing of the Guidance in an actual case of risk assessment? (yes or no) If so, please describe the case(s) used (e.g., small-scale environmental release of LM rice modified for resistance to bacterial blight; large-scale release of LM virus for rabies control in populations of wild mammals; small-scale release of LM mosquitoes modified for sterility; large-scale release of LM cotton modified for resistance to lepidopteran insects; etc.)
5. In the case being used for testing the Guidance, what parts of the Guidance were used? (the entire Guidance; the first main section;
6. In testing the Guidance with an actual case, please indicate your impression of the practicality, usefulness and utility of the Guidance, (i) with respect to consistency with the Cartagena Protocol on Biosafety; and (ii) taking into account past and present experiences with living modified organisms.
7. In testing the Guidance with an actual case, to what extent is the Guidance consistent with the Cartagena Protocol on Biosafety? (very consistent; somewhat consistent; somewhat inconsistent; very inconsistent) Wherever possible, please indicate the text in the Guidance that supports your answer.
8. In testing the Guidance with an actual case, to what extent or how well does the Guidance take into account past and present experiences with LMOs? (very well; somewhat well; somewhat poorly; poorly) Wherever possible, please indicate the presence or absence of specific reference documents cited in the Guidance that were relevant to the actual case used to test the Guidance.
9. Please characterize the quality and relevance of the background documents to the Guidance, citing wherever possible the relevance to the actual case(s) used for testing the Guidance. Please indicate any references that are inappropriate as well as any references that would be appropriate and suitable for inclusion among the background documents to the Guidance.
10. Please describe what possible improvements to the Guidance you would like to have included in the report for consideration by the Conference of the Parties serving as the meeting of the Parties to the Protocol at its seventh meeting.
11. To what extent does the Guidance complement the revised training manual on risk assessment? Wherever possible, cite the specific sections of the Guidance and the revised training manual on risk assessment.
These questions may be just the start of a questionnaire for testing, but I hope they give a flavor of how the questions and testing can be developed in a way consistent with the specific mandate of the Parties related to risk assessment. I look forward to other ideas as this important work of testing the Guidance progresses.
Thanks,
David Heron Biotechnology Regulatory Services USDA-APHIS
posted on 2013-01-25 19:27 UTC by David Heron, United States of America
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4092]
Dear all
In my view Sarah (#4090) raises a very important point that I’d hate to see get lost. Some in this forum certainly have given me the impression that they have their minds made up already that the Guidance is not useful, to them and perhaps by extension to anyone else. This is unfortunate and could compromise the testing. The task given by the MOP was to improve the guidance through testing, not to decide whether it is useful for everyone. It may not be useful to some but through no fault of the Guidance.
I like the way that Jelena posed her nominated questions (post #4080). Her questions don’t just ask someone to declare support for or against something, but to say why they felt that way or to suggest how the Guidance could be improved.
This contrasts with Dave’s questions (#4089) which I believe could be much improved by following Jelena’s approach. For example, question 9 “Please characterize the quality and relevance of the background documents to the Guidance, citing wherever possible the relevance to the actual case(s) used for testing the Guidance. Please indicate any references that are inappropriate as well as any references would be appropriate and suitable for inclusion among the background documents to the Guidance.” would become more like this:
*Please characterize the quality and relevance of the background documents to the Guidance, keeping to the references that were relevant to the actual case(s) used for testing the Guidance. Please state why you think any references were helpful or not, what references you thought might be more useful, and why.*
Questions such as Dave’s #8 are in my view inappropriate for testing. How can the Guidance, which itself is not a risk assessment on a particular case, take into account past and present experiences with LMOs in general? What this question is asking seems to me to be whether or not the Guidance conforms to the manner in which particular countries have previously done risk assessments in a hypothetical context. That more trivial question could be reserved for those countries to answer, if it were found to be useful information to have.
I would suggest that this and similar questions be reworked, such as in this case:
*In testing the Guidance with an actual case, please describe to what extent or how well the Guidance provided you as a risk assessor with confidence in your assessment? (very well; somewhat well; somewhat poorly; poorly). Are there parts of the Guidance that alerted you to gaps in information that contributed to your findings?*
Again my very best wishes to the members of the forum Jack
posted on 2013-01-25 21:34 UTC by Mr. Jack Heinemann, University of Canterbury
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4088]
POSTED ON BEHALF OF HECTOR QUEMADA ------------------------------------------------------
I have been following the online forum with interest, and would like to offer comment on a few of the interventions made previously.
I would like to support Piet van der Meer's suggestions for questions to be used in the testing. I am especially concerned with the two main questions and their supporting sub-questions, which address the two most important considerations: whether the guidance truly achieves the original purpose of providing guidance on implementing Annex III, and whether it is useful in particular to inexperienced risk assessors.
I also agree with the comments of Maria Mercedes Roca, calling for good translations into languages other than English. The Roadmap will only be helpful as long as it is a document usable by others. As Maria has made us aware, there are nuances in other languages, Spanish for example, that do not translate easily from English, and vice versa.
Finally, I respectfully disagree with Wei Wei on the use of academic risk researchers to review the Roadmap. The standard against which the Roadmap must be judged is whether it is truly helpful in practical implementation of Annex III, and not whether it addresses all of the questions that one might ask from an academic perspective.
Hector Quemada Donald Danforth Plant Science Center
posted on 2013-01-25 19:20 UTC by Dina Abdelhakim, SCBD
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4090]
Dear Hans and members of the forum,
Again, in regards to the question of who the target audience should be, I would like to agree with Jack’s post (#4078).
I would also like to add information about the previous testing of the guidance which contradicts Geoff (post #4077), who said “To include anyone else is likely to tell us what we already know – those not routinely involved in doing risk assessment do not understand the guidance”. In fact, questionnaire results from the previous testing show that less experienced countries (e.g. Bolivia, Cuba, Somalia, Slovakia) did think that the guidance was useful and user-friendly. They did understand the guidance and found it useful, and therefore comments such as theirs should be used. On the other hand, more experienced countries (e.g. Brazil, US, Germany) found that they did not need the guidance. It might not be useful to them for all sorts of reasons that are not more important than the reasons it was useful to other countries.
Although I find the term "more and less experienced" a bit biased, since it depends on what we think are the proper conclusions of an RA, let’s take it as the number of times these countries have performed any type of RA, as Geoff said: "routinely involved in risk assessment."
Moreover, I am a bit confused by Deise’s post (#4073) in regards to “each of the Parties (experienced or not) will involve experienced decision-makers and other practitioners as much as they feel it is appropriated to their process of decision”. I think that the participation of all interested groups should be assured, and not in effect ‘chosen’ by regulators or national competent authorities.
Best regards,
Sarah Agapito
posted on 2013-01-25 20:30 UTC by Dr. Sarah Agapito-Tenfen, NORCE Norwegian Research Centre
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4093]
Dear Participants, I want to thank Hans again for clarifying many key things, and all the participants for sharing their views. I share the opinion of many on using a survey/questionnaire as the tool for documenting the testing; in this sense I also agree with the recent posting by Janet [#4091], besides supporting the comments by Francisca [#4085] and Adriana [#4086] in terms of a practical approach. I think that the questions proposed by Luciana [#4069], Jelena [#4080], and Hiroshi [#4083] include good suggestions for inclusion in the questionnaire. In particular, David Heron’s questions [#4089], considering the two types (questions to test the guidance and questions to gather information about the tester and the context), are extremely useful. I also agree with the complementary tools proposed by David, 1 to 5, in his posting [#4089], but I would add 6) access to the background documents linked from the guidance, since we would like to test that component of the guidance as well. Finally, a quick reaction in relation to the workshops. Considering Hans' definition of a workshop in our context, “a meeting of a few people that work together hands-on on a project: the testing”, I agree that this could be a good method (ok, it's not a tool) to test the guidance. A meeting of a few people involved in risk assessment is the key. Big workshops would have another purpose (capacity building) and are mandated in another decision.
When people work together it is very useful for exchanging ideas and reaching an understanding, but it is also useful for clarifying things that lack clarity (in a way we are doing this by exchanging ideas in this forum). The people working together to test the guidance will need to make sure they get a clear idea of where exactly the guidance lacks clarity, instead of exchanging ideas that make clear something the guidance itself does not. Improving the Guidance will be done as a next step; first we have to know what should be improved.
After two weeks of a very intense exchange of ideas, I think we are gathering material to work on and advancing toward fulfilling the mandate of COP-MOP 6. Kind regards Sol
posted on 2013-01-25 22:40 UTC by Ms. Sol Ortiz García, Mexico
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4094]
Dear Hans
Thank you for your efforts to direct this discussion.
What are we testing for?
Referring to point 5 of Decision BS VI-12, we should test for a) practicality, b) usefulness and c) utility of the Guidance, (i) with respect to consistency with the Protocol; and (ii) taking into account experiences with LMOs.
Whether people want to gather in multi-stakeholder workshops, in small groups of colleagues as Hans says, or work individually is the decision of each, but all should use the same reporting tool; we should allow flexibility, as Francisca says [#4085]. I agree with many that the best tool to collect information on the testing is a questionnaire. For it to be useful, it will need to be based on free text and not yes/no answers. This will not make for easy tabulation and analysis of the answers, but otherwise we will have a questionnaire of limited utility, as in the previous testing of the Guidance document.
When evaluating the Guidance, all its sections should be addressed. As Sol has said, different types of environmental releases (confined, unconfined) as well as different types of LMOs, not just herbaceous plants, should be considered.
It is important to know the role and experience level of the responder to put answers in perspective.
When answering whether sections of the Guidance are useful or not, the responder needs to indicate why.
I think Piet [#4059], Vila [#4050] (on clarity of language), Maria Mercedes [#4043] (on clarity of language that would facilitate translations), Katherine Bainbridge [#4044], and others have already suggested questions that are important to include. I agree with the request from Andrew Roberts that 'following the conclusion of these discussions a plan for testing of the guidance should be drafted and circulated', and I would hope the list of questions for the survey could be shared for input from this group before it is finalized. This should greatly increase the chances of a successful testing of the Guidance.
Esmeralda Prat
posted on 2013-01-25 23:18 UTC by Ms. Esmeralda Prat, CLI representation
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4095]
POSTED ON BEHALF OF HANS BERGMANS --------------------------------------------------
Dear Participants of the second discussion in the Open-ended Online Forum:
I would like to thank all the participants of the Forum for their zeal and for their efforts to contribute their ideas and their comments.
Now that this forum is closed, I will try to make a summary of the main points of the discussions, which will be made available to you shortly as the final posting in this discussion.
This has been a very lively and very useful discussion, thanks to all of you. As I said before, it has been an honor and a pleasure moderating this discussion, which continued the enthusiasm of the previous round of discussion. I can only hope that this enthusiasm continues in the future endeavors of the Open-ended Forum.
Best regards,
Hans Bergmans
posted on 2013-01-26 00:05 UTC by Dina Abdelhakim, SCBD
|
RE: Opening of the discussion group on the development of appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms.
[#4138]
POSTED ON BEHALF OF HANS BERGMANS ---------------------------------------------------
Dear Participants of the second round of the Open-ended Online Forum,
Thank you very much once again for the efforts you took to contribute to this very lively forum discussion.
The task of this forum discussion, as outlined in paragraph 1(a) of the annex to decision BS-VI/12, was to “Provide input… to assist the Executive Secretary in his task to structure and focus the process of testing the Guidance…”, in particular to assist in the development of “appropriate tools to test the Guidance on Risk Assessment of Living Modified Organisms” as per Paragraph 5(a) of the decision.
Let me first clarify the question that arose from the discussions: "What are we testing for?" It was apparent from the discussions that the concept of testing the Guidance was interpreted differently by different participants. By extension, the various testers may also initially have very different interpretations. It must therefore be made quite clear to all testers what exactly they are being requested to do when they are testing the Guidance in actual cases of risk assessment.
Paragraph 5(b) of decision BS-VI/12 states that "…testing [will be] on the practicality, usefulness and utility of the Guidance, (i) with respect to consistency with the Cartagena Protocol on Biosafety; and (ii) taking into account past and present experiences with living modified organisms". Thus testers are not requested to show their ability to perform risk assessment, nor to come to a specific conclusion. They are asked for their opinion on the usefulness and utility of the Guidance. If they perform a risk assessment in the testing process, the purpose is to find out if and how the Guidance has helped or hindered them in their work.
Therefore, the “tools” that are to be developed are not tools to help testers perform a risk assessment, but tools that explore the qualities of the Guidance. The tools should streamline the discussions as well as the reports of the testers so that clear conclusions can be drawn for improving the Guidance as appropriate.
Additionally, there was a strong call from Parties, as well as from other Governments and relevant organizations, for the format of the testing to be flexible enough that they can adjust the testing to their own situation and needs. Let me attempt to summarize the comments that relate to the guiding questions:
1.What, in your opinion comprises a tool to test the “Guidance on Risk Assessment of Living Modified Organisms” in actual cases of risk assessment?
As previously noted, the tool needs to structure the process of gathering information on the qualities of the Guidance from the testers. There was some level of agreement on a questionnaire approach. The majority of participants were agreeable to using a questionnaire because it would provide a clear and concise method of data gathering that is relatively standardized and would make it easier to produce a meaningful report on the usefulness and utility of the Guidance. It was stressed, however, that for the questionnaire method to be successful it should pose open questions, which demand explicit, reasoned answers and comments for the improvement of the Guidance. Other participants suggested that a questionnaire was not the best approach, due to the inherently prescriptive and isolated nature of a questionnaire, which may not fully capture where the Guidance can be improved.
Workshops were also suggested as a method to perform the testing. In this context, a ‘workshop’ is meant as a meeting where the testers will have the opportunity to directly discuss case studies. Such a meeting would allow for a more explorative and free exchange of views. The workshops may range from a small group of people within one country to a larger regional meeting. It was however pointed out by several colleagues that the costs involved might be prohibitive to the holding of such workshops. Additionally concerns were raised as to who would attend such workshops and that every effort must be made to ensure the participants are representative of the different backgrounds of the various stakeholders that are involved in the risk assessment process of the Party, and that are to be involved in the testing.
The Forum also suggested that an optimal scenario may require the use of a combination of the various complementary methodologies: questionnaire for focusing the discussion and harmonizing the report of the testing and a workshop setting for performing the testing in actual cases of risk assessment.
There was general agreement that, regardless of the methodology used to gather the data, the testers must be encouraged to use various actual cases of risk assessment for the testing process. These include cases that cover small- and large-scale applications, field trials, confined and unconfined releases, and cases that are regionally varied and appropriate. Such cases can include surveying dossiers that were recently assessed and submitted to the BCH.
2.What specific tools for testing would you recommend for each of the different parts of the Guidance? and 3.How can these tools be modified to capture specific information on each of the three sections of the Guidance?
In response to these two questions, participants indicated that no specific tools beyond those already specified would be needed. However, suggestions were made that, when testing the Guidance, testers can compare and contrast the introductory sections on overarching issues and the planning phase from the Roadmap with the content of their own legislation and guidance on performing a risk assessment, where applicable, and provide feedback regarding any deviations or harmonization between the two.
Additionally the testers can be encouraged to look at actual cases of risk assessment that involve specific organisms and traits for which there is Guidance with a view to try to test the whole Guidance. A key question for testing the different parts might be whether or not the different sections of the Guidance are complementary with the Roadmap.
4.How can the tools be adapted to accommodate the broad sets of experiences from the various target groups?
COP-MOP decision BS-VI/12, paragraph 3, "Encourages Parties, other Governments and relevant organizations, through their risk assessors and other experts who are actively involved in risk assessment, to test the Guidance in actual cases of risk assessment and share their experiences through the Biosafety Clearing-House and the open-ended on-line forum." As such, there are various target groups that can participate in the testing process. In the discussion it was pointed out that various "other experts", such as scientists, may be involved, in so far as they are involved in the LMO risk assessment process in their country.
With regard to adapting the testing methodology to accommodate the varying levels of experience of testers, the general feeling was that it would not be useful to make any adaptations to the questions themselves. However, suggestions were made that data should be collected on the testers' levels of experience, and that less experienced testers could possibly be provided with less complicated cases for carrying out the testing. Such questions would contextualize how the testing was performed and by whom. It was noted that these questions should be included in the questionnaire to allow an appropriate interpretation of the results of the testing and a comparison of the results of the various testers.
5.In what form should this tool be presented to the target groups that will carry out the testing?
This has been covered in the previous responses: a questionnaire complemented by a workshop, and using actual cases that may be provided for testing.
Several participants provided a number of excellent concrete suggestions of possible questions for inclusion in the questionnaire.
It was a pleasure to be a moderator of this forum.
Best regards, Hans Bergmans, moderator of the discussion on Topic 2 in the Open-ended Forum
posted on 2013-02-04 16:25 UTC by Dina Abdelhakim, SCBD
|