Sub-Working Group on the Study of Social Rules for Automated Driving Vehicles in the Age of AI (First Meeting)
Overview
- Date and Time: December 25, 2023 (Mon) from 16:00 to 18:00
- Location: Online
- Agenda:
- Opening
- Content
- Explanation by the Secretariat (background and purpose of the review meeting, confirmation of the current status, comparison with overseas systems, expected issues, etc.)
- Exchange of opinions
- Adjournment
Materials
- Material 1: Agenda (PDF/67KB) (updated January 11, 2024)
- Material 2: List of members (PDF/112KB)
- Material 3: Secretariat explanatory material (PDF/4,289KB)
- Reference Material 1: Meeting of the Sub-Working Group on the Review of Social Rules for Automated Driving Vehicles in the Age of AI (PDF/94KB)
- Data submitted by members:
- List of participants in the first meeting of the Sub-Working Group on the Study of Social Rules for Automated Driving Vehicles in the Age of AI (PDF/81KB) (updated May 20, 2024)
- Proceedings (PDF/489KB) (updated May 20, 2024)
Minutes
Mr. Hasui: First, some administrative announcements. Today's meeting will be held entirely online. Members are asked to keep their cameras on throughout the meeting, to unmute their microphones when speaking, and to mute them while others are speaking. Observers are asked to keep both camera and microphone off. Next, I will confirm the materials. As stated in the agenda sent in advance, the materials are the agenda, the list of members, the secretariat's explanatory material, and the materials submitted by members. If you are missing any, please contact the secretariat via the Teams chat function or by email. In the interest of time, I would like to forgo individual introductions of the members of this group and instead refer you to the list of members at hand. As for today's attendance, Members Goto, Suda, and Fujita are scheduled to join partway through the meeting, and I have heard that Member Suda is scheduled to leave partway through. Please note that, as a general rule, the materials and the minutes of this meeting will be disclosed at a later date. That is all for the administrative announcements. I would now like to ask Chair Kozuka to take over the proceedings. Chair Kozuka, over to you.
Mr. Kozuka: Thank you very much, Mr. Hasui. I will proceed according to the agenda. First, I would like to ask Minister for Digital Transformation Kono to give an address at the opening of this sub-working group. Minister Kono, please go ahead.
Minister Kono: Thank you for joining us today despite your busy schedules at the end of the year. I believe that automated driving is extremely important for solving the social issues Japan faces: not only improving traffic safety by reducing traffic accidents, but also resolving the driver shortage by making unmanned operation possible. In particular, amid rapid population decline, workers are in short supply, and the shortage of drivers for regional public transportation is becoming serious, especially in depopulated areas. To solve such issues and build a society in which residents can live convenient and prosperous lives in every region, it is necessary to make the most of Japan's technological capabilities and to move automated driving beyond technological demonstration to practical social implementation as early as possible. I also believe that automated driving technology will be extremely important to the automobile industry going forward, and that, as with personal computers, the relationship between automobiles and automated driving may become one in which the core value lies not in the hardware but in the software that runs on it. At the Digital Administrative and Fiscal Reform Conference, Prime Minister Kishida also instructed us to accelerate the environmental improvements needed for the active commercialization of new automated driving services, including social rules for mobility. One of those necessary arrangements is social rules for automated driving. It has been pointed out that when an accident is caused by an unmanned automated driving vehicle, there is no predictability as to who will be held responsible, or what kind of responsibility will be imposed, under the existing system that assumes the presence of a driver.
As a result, people involved in automated driving operations in various capacities fear being held liable, and this is a major risk deterring business operators from participating in the operation of automated driving. In fact, it has been pointed out that when an automated driving vehicle caused a minor contact accident in the Olympic Village during the Tokyo Olympic Games, the pursuit of responsibility for that accident went on for an extremely long period, and a chilling effect has already begun to appear. If this continues, practical automated driving technology will not grow in Japan, and domestic technology will disappear. The disappearance of domestic technology, in other words of technology developed for automated driving, could have an extremely large impact on the automobile manufacturing industry, one of the pillars of the Japanese economy. In that sense, I believe it is no exaggeration to say that the discussions in this sub-working group will affect not only the future spread of automated driving but also the future of the Japanese economy. Setting the longer term aside, at this point I would like the discussion to center on what kinds of rules are necessary, and what should be done, to ensure the prompt social implementation of automated driving in Japan, on the major premise of providing sufficient relief to victims while giving maximum consideration to safety. It is conceivable that different rules will be created once the social implementation of automated driving has passed the diffusion stage, but today I would like a deep discussion on how to advance its social implementation in Japan quickly.
In addition, I would like you to reach conclusions on social rules and the surrounding environment so that business operators can actively engage in industrializing new technologies: for example, a mechanism to prevent accidents by collecting, analyzing, and utilizing data, and various efforts to increase predictability. I believe this discussion is extremely important for the future of Japan, and I ask for your kind cooperation.
Mr. Kozuka: Thank you very much, Minister Kono.
Counsellor Hitoshi Suga:
(Based on "Material 3: Secretariat explanatory material," the background and purpose of the review meeting, confirmation of the current status, a comparison with overseas systems, expected issues, etc. were explained.)
Mr. Kozuka: Thank you very much for the informative explanation in a short time. I would now like to proceed to the exchange of opinions under Agenda 3. Since this is the first session, I would like to hear from each member. In addition to opinions on the secretariat's explanation, opinions on the social rules for automated driving vehicles in the age of AI in general are also welcome. First, to confirm the framework: my understanding is that once the discussions of this sub-working group are summarized, they will be reported to the Digital Administrative and Fiscal Reform Conference and the Mobility Working Group. Please keep this in mind. Based on this sub-working group's report, the Mobility Working Group will then set a time frame for review and request the relevant ministries and agencies to conduct the necessary examinations. Today, I would first like you to point out what problems exist and what should be addressed. The materials contain a diagram of civil and criminal legal liability and the like, and lawyers tend to start immediately debating whether such problems could arise or be realized; however, that is not the purpose today. Rather, please point out anything that may have been omitted from the secretariat's explanation so far. We would also appreciate comments that take the time frame into account: what can be solved in the short term, and what is difficult to solve in the short term but should be solved in the medium to long term. As time is limited, we will hear opinions in the order of the member list, with about four minutes per person. Mr. Inadani is first on the list. Could you please begin, keeping to about four minutes?
Inadani Member: This is Inadani. Thank you. Prior to the opening of this sub-working group, I submitted a written opinion together with a separate sheet explaining its background and reasons. Please refer to the attachment for details; here I will briefly explain the main points. In outline, my opinion is that, to address the information asymmetry between the regulating side and the regulated side and the obsolescence of regulation, which are the core problems in the legal governance of advanced science and technology, the legal system as a whole should be developed so that it can evolve and respond quickly to ever-changing risks while drawing on private-sector ingenuity and innovation, in line with the concept of agile governance. First, regarding administrative regulation, I propose ensuring the safety of automated driving vehicles through a certification system based on performance regulations, which leaves room for private-sector ingenuity and innovation. However, as recent problems have shown, introducing such systems increases the risk of moral hazard on the regulated side. From the perspective of preventing this while providing incentives to supply the information needed to update the legal system, I have also made recommendations on desirable civil and criminal liability systems and sanction systems.
Regarding the civil liability system, from the perspective of effectively giving the parties who can manage risks arising from AI systems incentives to manage those risks appropriately and continuously, I propose examining the desirable form of the Automobile Liability Act, the Product Liability Act, and general civil liability law as applied to AI systems, with a view to adopting strict liability for compensation. This includes considering whether the current-law concepts of negligence, defect, and malfunction fit the characteristics of AI, which behaves probabilistically, and of AI systems, in which problems arise from interactions among multiple components. Next, regarding the criminal liability system, in order to eliminate the uncertainty stemming from the difficulty of appropriately interpreting and applying the current-law concept of negligence to AI systems, I propose examining measures so that developers and operators do not shrink back excessively for fear of personal liability: for example, applying the legal doctrine of permitted risk in a way that can be institutionally justified in combination with performance regulations. In addition, since deficiencies in the safety of products and services supplied by companies are often caused by structural problems such as failures of governance and compliance rooted in corporate culture, I have proposed developing strict sanction systems against companies that supply products that do not meet the required safety-related performance regulations, companies that make false declarations in safety-related performance certification, and companies that refuse to cooperate with, or obstruct, safety-related investigations and reporting. I have also proposed introducing systems such as so-called deferred prosecution agreements and whistleblower incentives: procedures under which the competent authorities may defer prosecution when a company subject to sanctions promises to voluntarily provide the necessary information on the safety of its products and services upon detecting signs of problems or experiencing accidents, to provide information on any malicious individuals involved, and to work on reform and product improvement. Deploying such systems across AI systems generally, in a manner consistent with the incentives provided by civil liability, would effectively give companies incentives to continuously maintain and improve the safety of complex AI systems such as automated driving vehicles. Furthermore, regarding the safety of AI systems such as automated driving vehicles, many ministries and agencies are currently involved, making centralized and prompt handling difficult. I therefore also propose creating a government agency that handles safety in a centralized manner, in order to promote safety through responsible innovation in AI systems, and developing an effective safety management and accident investigation system by referring to cases, such as aviation, in which safety has been continuously improved.
In addition, regarding the collection of safety information on automated driving vehicles, I believe that the problems arising from the limited human and material resources of the competent authorities, and from cross-border investigations, should be resolved by giving companies incentives to actively provide information to the authorities through systems such as deferred prosecution agreements. Finally, regarding the evolution of the legal system as a whole, with the aim of ensuring the safety of automated driving vehicles and other AI systems across the entire legal system, and so as not to hinder sound competition among companies, I also propose a mechanism under which necessary safety information is promptly shared, to the extent necessary and after appropriate consideration of business interests, under the leadership of the government agency supervising safety, and under which performance regulations and certification standards and methods can be reviewed as needed. Many of my views are medium-term, looking two or three years ahead. However, I believe that institutionalizing the interpretation and application of permitted risk, and strengthening the accident investigation system (in particular, introducing a deferred prosecution agreement system, or a system that increases companies' incentives to cooperate sincerely with investigations as a step toward its introduction), should be realized as soon as possible. That went a little long, but that is all from me. Thank you very much.
Mr. Kozuka: Thank you very much, Mr. Inadani. Would it be acceptable for the secretariat to share this written opinion with the members later?
Counsellor Hitoshi Suga: Yes, that is fine. We would like to make it public as well, and I will share the link later.
Mr. Kozuka: Thank you very much. It seems that other members have also submitted written opinions, so please share them after confirming with each member. I mentioned proceeding in the order of the list, but since some members must leave early, I would like to change the order slightly. Mr. Suda, could we hear from you at this point?
Suda Member: I am Suda from the University of Tokyo. I am sorry to have been late. I must leave at 5:30, so I would like to speak first. I myself am not a specialist in law but in mechanical engineering; I am on the development and research side. I have also been involved in on-site accident investigations. First, I served on the investigation committee for the derailment accident on the Hibiya Line in March 2000. Since then, I have been involved in the derailment on the Fukuchiyama Line as an expert member of the Accident Investigation Commission and the Japan Transport Safety Board. I believe the most important concern is to identify causes and prevent recurrence. An accident investigation committee has in fact been established for automated driving as well, but at present it is not an independent organization; it is supported jointly by three bureaus: the National Police Agency, and the Logistics and Motor Vehicles Bureau and the Road Bureau of the Ministry of Land, Infrastructure, Transport and Tourism. Moreover, it does not have investigative authority, and when I was involved in several cases recently, we could not conduct specific hearings. I therefore believe it is extremely important to promptly establish a mechanism similar to the Japan Transport Safety Board. Even once hearings become possible, however, we will need various measures to get people to tell the truth and to share that information with everyone. Testimony from the people involved is extremely important, so I think it is important to make clear that the investigation is not a forum for pursuing responsibility. Although I am not a legal expert, I believe it would be very effective to establish a system of immunity from criminal liability based on certain rules.
What I felt at the time of the Hibiya Line accident was that the people involved had not acted with malicious intent; they were doing their work for the sake of society and other people. Yet the engineer in charge at the subway operator was referred to prosecutors. Around the same time, the fabrication of archaeological ruins came to light; that fabrication was entirely malicious, yet it never became a criminal case at all, and I remember feeling the contradiction. Based on this, I believe it is important to consider how to treat cases where there is no malicious intent. More recently, there was an incident in which a train on the Seaside Line, a new transit system in Yokohama City, ran in the reverse direction and crashed, injuring several people. In the end, the person in charge at the manufacturer and his superior were referred to the prosecutor's office. In that sense, I would like to see proper rules established for development in automated driving. From a long-term perspective, I also believe we need rules under which criminal cases are brought against corporations rather than individuals. I personally believe it is also important to create a unified law for the rules on automated driving, rather than leaving them scattered across various laws. In addition, I believe that victim relief is extremely important for raising the social acceptance of automated driving. Rules for this have, I believe, already been established, and it is important to conduct outreach so that they are widely known among the public. I speak from the position of the development and investigation side, and the above is my personal opinion. Thank you very much.
Mr. Kozuka: Thank you very much, Mr. Suda, and thank you for taking the time to attend. Those were valuable points. I will now return to the order of the list. Mr. Imai, could you make your statement?
Imai Member: I believe the PDF can be shared on screen, so please display it, and I will proceed in that order. Since I specialize in criminal law, I would like to express my opinions from that perspective.
In today's opening remarks, it was said that there is no driver, that operation is driverless. At present, however, although operation within the ODD is driverless, it is planned that people will monitor from remote locations or be deployed to take measures on site.
Therefore, in the process of moving toward completely unmanned operation, the question is what kind of criminal liability may be pursued against the people involved. In addition, as mentioned in the opening remarks, when AI strengthens its self-learning functions and becomes able to process data beyond the scope set by humans, if an accident results, who will be held accountable, and how? The overall structure is this: in the short term, what message do we send to the people currently involved; and in the long term, what happens when AI processes data beyond human involvement and an accident occurs. First, as background, a figure is shown in my material. As you know, automated driving within an ODD corresponds to Levels 3 and 4, while driving outside an ODD corresponds to Levels 1 and 2. Today's focus is Level 4 and above, but to understand Level 4, I think you first need to look at Level 3. Level 3 can be seen as a combination of Level 4 and Level 2: until a takeover request is issued, a Level 3 vehicle drives just as at Level 4; once a human takes over driving in response to the request, it drops to Level 2 or below. My point is that we should leverage the discussion on Level 3 to date, which will also be useful when considering Level 4 and above. As shown in the lower part of the material, at both Level 3 and Level 4 there is a phase of transition from automated driving within the ODD to non-automated driving. Even now, when operating remotely at Level 4, we should not forget that people are involved. The material describes who could be punished if an accident occurs at Level 4. For example, I think the specified automated operation supervisor is functionally equivalent to a remote monitor.
Various people are involved in realizing Level 4 operation: those who built the vehicle body, those who installed the ADS in it, those who sold it, those who created the ADS algorithms, and those who provide the related telecommunications services. Therefore, at this point, if someone is unfortunately killed or injured by a Level 4 vehicle, the problem will be determining what duty of care each such person owed and whether there was negligence. Specifically, persons such as the specified automated operation supervisor are not currently treated as drivers in Japan; if they were treated as drivers from the viewpoint of criminal law, the question would be whether the crime of negligent driving resulting in death or injury applies. If they are not drivers, that crime, which applies mainly to drivers, does not apply, and the question becomes whether the crime of professional negligence resulting in death or injury applies. Establishing negligence itself is very difficult, and on top of that there is the problem of causation; in dilemma situations, where a smaller legal interest is sacrificed in an accident to protect a larger one, illegality may also be denied under the defense of necessity. As for causation, there is the Tesla incident that occurred on the Tomei Expressway, a Level 2 case. Causation was actually at issue there, but the academic community has shown little interest, and even in the papers dealing with the case no one has pointed out that the causal relationship was in fact questionable.
This relates to the long-term issues as well: when people are involved with AI in shaping a vehicle's behavior, we must consider whether there was an unlawful infringement of legal interests before examining causation and negligence. There are still various points of contention here, and even within the criminal law academic community no settled view has formed. Recently, there has been debate over whether the criminal liability of those involved can be reduced or eliminated if so-called ethical guidelines are established and followed. Next, please turn to the section titled "Short-term issue: scrutiny of technical guidelines, not ethical guidelines." My conclusion is that ethics guidelines are very useful intellectual work, but they are highly prescriptive and use vague language, so it is not clear what the recipient should actually do. Moreover, there is no guarantee that criminal liability will be reduced or waived even if the guidelines are followed, so we should develop guidelines that actually help reduce criminal liability; I believe those are technical guidelines. The purpose of any such guideline, I believe, is to give direction on questions such as whether it is acceptable to protect the lives of people who have nothing to do with automated driving at the expense of the bodies of people who are involved with it. Protecting human life comes first, and I understand that ethical guidelines are discussed in that connection, but ethics are diverse. The role of soft law is very important, but consider, for example, the bears that come down, cause harm to people, and are culled: there are many different opinions and value judgments about this. I do not think it will be very effective for those involved in automated driving development if matters that can only be organized in normative terms are presented as if they had been settled.
What I have in mind is that, based on the Level 3 implementation, we should analyze the probability of accidents within the ODD from statistical data, and, for example, require an immediate takeover request, or decline to certify the ODD, where a particular area is dangerous or a particular behavior is expected. Showing with statistics what can be done, and to what extent, is something engineers and lawyers should work on together from the ADS design stage. The details are in my material. It also covers permitted risk. Professor Inadani just mentioned utilizing it, and although I wrote somewhat presumptuously in anticipation of the conclusion, if the approach focuses on performance and function, as Professor Inadani said, I think it can be used. The theory of permitted risk appears as early as the work of a scholar named Karl Engisch, and the influence of British act-utilitarianism can arguably be recognized at its base. From that perspective as well, I do not believe that advocating only a normative approach will be effective in the age of AI. As for how to create the guidelines, the Product Liability Act and other laws refer to the state of the art at the time, and on that basis I think the guidelines should have content that helps us calmly examine to what extent measures were taken. As a premise, as described in my material under the clarification of the actual operation of the permitted risk theory, a strict interpretive theory must be presupposed. As a long-term issue, black boxes will appear in various places when AI goes beyond Level 4 to Level 5, or when, even at Level 4, AI leaves the hands of human monitors within the ODD and configures new behavior based on its data processing.
When an accident occurs, many connected cars are running at the time, so it is very difficult to determine which car's data was processed to select the behavior in question. Causation will not be provable using current criminal law theory, and negligence cannot be handled by normative thinking alone. Unless we define what ought to be done on the basis of technical considerations, such as what kinds of data processing are possible, how far ahead the designers thought, whether a buffer was provided, and what happens if the connection is lost, engineers will be afraid, never knowing when they might be punished for negligence. I think lawyers should talk calmly with engineers and reconsider how negligence is framed. In addition, Dr. Suda mentioned corporate punishment earlier; as an extension of that, there is also the argument that if AI comes to process data on its own, it may be appropriate to punish the AI itself.
Footnote 1 cites a report I made this year as Japan's representative at the International Association of Penal Law. There, the study of the corporate-like nature of AI, and of how to understand AI as a subject of criminal liability, has only just begun, but progress will likely be rapid, so I believe we should take that into account in considering how criminal punishment should apply to, and how to approach, AI. That is all.
Mr. Kozuka: Thank you very much, Mr. Imai. I am very sorry, but we ask each member to keep to about four minutes, so please be mindful of the time. Next, Mr. Ochiai, please.
Ochiai Member: On page 3, as a basic point of view, as explained at the outset, it is important to determine short-term policies, but I think it would be bad if they were divorced from the medium to long term. As a perspective for consideration, I think agile governance will be important. Complex data processing by AI and the interlinking of multiple systems may significantly limit predictability and controllability. In administrative regulation, outcome- and performance-based approaches have already been adopted in the digital principles and elsewhere, and I think it would be good to consider agile governance in the sense of accelerating voluntary risk responses and improvement measures by business operators.
Next, as a point of view for future discussions, I think autonomous driving will be implemented on the premise of cyber-physical infrastructure, so the environments in which it is implemented are important. Regarding the relationship with technological change, as autonomous driving technology approaches social implementation, the National Police Agency's materials also note that it is effective, on the whole, in reducing traffic accidents and easing congestion, and I think the discussion needs to address realizing that potential. In such a situation, we need to avoid a state in which, despite being safer on the whole, autonomous driving faces ever finer and stricter requirements for legal sanctions demanding a high degree of duty of care, producing a chilling effect. On the other hand, autonomous driving itself is extremely important in light of the "Year 2024 problem" and a society with a declining population.
The next slide is reference material, and the following page also relates to the background need. Now the next slide, please. In terms of perspective, the involvement of passengers in autonomous driving is limited at Level 4 and above, and absent at Level 5, so I think it will be difficult for passengers to contribute to preventing specific accidents. I believe we should aim for a framework in which passengers are exempt from liability for accidents absent special circumstances, partly to prevent the chilling effect that has been pointed out. As for developers and service providers of autonomous driving, I think victims must be fully compensated in civil cases. However, as Mr. Suda also suggested, retribution against the developer or service provider as perpetrator, and deterrence of third parties, are not necessarily achieved by executing punishment. If anything, I think it is important to design punishment so that it creates an incentive structure for sound improvement. Next slide, please. As for the direction of medium-term consideration, in relation to administrative regulation, I believe it will be necessary to develop regulations, including performance regulations, aimed at continuous improvement and the prevention of the spread of damage, while specifying what the cyber-physical infrastructure will be.
In addition, with regard to civil liability, it is necessary that compensation be made appropriately regardless of changes in liability systems, and in that sense I think it is important to develop a compensation insurance system. With regard to liability systems, from the perspective of giving developers and service providers incentives to improve their risk management, I believe it is necessary to reorganize liability systems, including system-related liability under the Automobile Liability Security Act and the Product Liability Act. In this regard, I have attached the recommendations and abstracts of the research conducted by the research institute to which I belong, in which Dr. Inadani and others participated.
Next, with regard to criminal liability, I believe the direction should be, in principle, to discharge criminal liability arising from personal negligence. Punishment for certain administrative regulatory violations and for intentional acts needs to be maintained, and cooperation in investigating the causes of accidents and preventing their occurrence and expansion needs to be strengthened. Overall, the design of criminal sanction systems has much in common with the points Dr. Inadani made, so I will omit it. In relation to accident investigations, I believe the need for advanced analysis is even clearer in the case of automated driving, as it is for aircraft accident investigations and for investigations by the Fair Trade Commission under the Antimonopoly Act. I believe there are two perspectives here, information sharing within the industry and the handling of accidents, and that it is important to develop an investigation system for each.
With regard to criminal and civil liability in today's materials, I believe liability is presented as if it extends to all parties concerned in the abstract, without assuming specific situations. A chilling effect is inevitable if this situation continues, so I believe it is important to discuss a rational allocation of liability by assuming concrete situations, and to show an arrangement under which the scope of liability does not become unlimited.
Next is the final slide. To proceed with a sense of speed, I believe it is important first to discuss measures that can be implemented without legal amendments, and basic principles in a Basic Act, to the extent that they do not involve changing the basic concept. It is also important to clarify these points in guidelines and the like, given that the development of guidelines may serve to some extent as a safeguard in actual legal judgments. In addition, since the de facto predictability of liability will increase if specialized organizations take on the role of accident investigation, I believe we need to urgently advance the organization and discussion of the accident investigation system. Furthermore, if we consider demonstrations under special provisions, it may be possible, by limiting the area, to establish virtual lanes in the National Strategic Special Zones and, for example, to reverse the burden of proof as is done for vehicles running on tracks. I believe it would be good to deepen discussions based on experiments with various methods such as this.
Mr. Kozuka: Thank you very much for your comments. We will discuss various things based on your knowledge, but I am told we have run about 20 minutes over schedule, so I would like to ask for 4 minutes per person from here on. Mr. Goto, who is next, joined partway through the meeting, so I would ask him to take a moment to prepare. In the meantime, Mr. Sakamaki, could you make a statement?
Sakamaki Member: My specialty is criminal law, and I will speak from the perspective of classical criminal law: criminal law cannot be moved by economic rationality alone, as it is bound up with human passions. First, I would like to state my awareness of the issues concerning criminal liability, namely the classic criminal liability of drivers for automobile accidents and the criminal liability for negligence of those who manufacture vehicles and manage their operation. Second, I would like to state my awareness of the issues concerning the relationship between criminal investigations into traffic accidents and investigations into the causes of accidents. The current state of criminal liability is illustrated in today's Handouts 2, 6, 1 and 2, so please refer to them as well.
In automobile accidents, criminal liability, that is, the actual application of punishment, is at issue only in so-called accidents resulting in injury or death. It must always be kept in mind that there will be a living victim or, where the victim has died, a bereaved family. Mr. Tadashi Takahashi, who will speak as a member today, has long worked to improve and realize the rights and interests of crime victims, and I myself came into contact with claims and opinions concerning support for crime victims at the Legislative Council, in connection with the expansion of victims' rights and interests and related legislation. Thanks also to public awareness activities by victim support organizations, social interest in consideration for victims has increased considerably. On the other hand, although both are crimes, I think society does recognize that the circumstances differ between a heinous murder and a death or injury caused by a car accident. Of course, setting aside reckless driving that comes close to murder, there are opinions that deaths and injuries caused by automobiles are accidents and can therefore be treated differently from ordinary crimes, and that the current situation, in which any driver can become a criminal, is acceptable. However, in Japan, criminal responsibility for fatal traffic accidents is probably imposed at a considerably higher level than in other countries, and punishments have been raised to a level considerably higher than for other crimes. Whether or not this is appropriate as legislative policy, it is an expression, through the Diet, of public sentiment in Japanese society toward fatal traffic accidents, and it must be seen as an expression of the retributive feelings of accident victims and their bereaved families, as well as of social sympathy for them.
I think this is a point that must be given considerable weight. In light of the circumstances described above, even in the long term, it is almost impossible to relax or reduce criminal sanctions for traffic accidents resulting in injury or death, to replace them with an expanded scope of financial sanctions and damage compensation, to create a prosecution route different from ordinary criminal prosecution, or to decriminalize certain types of traffic accidents resulting in injury or death, although each of these measures has a reasonable aspect from the viewpoint of pure system design. However, with regard to administrative regulations other than the Road Traffic Act that relate to traffic accidents, or penal provisions related to the manufacture of automobiles, there may be room to review the effectiveness of existing criminal sanctions such as fines and to decriminalize some of them, from the viewpoint of ensuring the effectiveness of the individual provisions.
Next, since traffic accidents resulting in injury or death are subject to criminal liability, the investigation of the cause of an accident is currently conducted as a criminal investigation under the Code of Criminal Procedure. The purpose of such an investigation is to collect evidence for future prosecution and punishment and to determine the driver's negligence. Part of the results may contribute to clarifying the cause of the accident, but there are limits to using them to prevent future accidents or improve equipment. In addition, as a matter of logic, the argument that the driver's duty to report an accident under the current Road Traffic Act may conflict with the constitutional right to remain silent (the privilege against self-incrimination), because the report may be directly linked to the driver's own criminal prosecution, has been settled in case law. Since the problem nevertheless remains as a legal argument, if a comprehensive accident investigation organization is created in the future, I think it would be logical to grant de facto or de jure immunity from prosecution or punishment to, for example, technical supervisors who might otherwise face criminal liability for professional negligence, in light of the investigation's purpose of contributing to the determination of causes and to future improvement and prevention. However, this also requires sufficient caution with regard to the feelings of victims and the general public's attitudes toward punishment. The above is brief, but it is my rather classical recognition of the current situation.
Mr. Kozuka: Thank you very much for your comments. Then, Mr. Sato, I am afraid I must ask you to keep it to about 4 minutes.
Sato Member: Please start from page 2. As a practitioner, when I consider the various issues related to automated driving and mobility services, I would like to point out the things that usually concern me. First, with regard to criminal liability, which has been raised as a topic, it is said that pursuing the criminal liability of manufacturers will hinder the development of automated driving. As for the liability of officers and employees of conventional automobile manufacturers, negligence is difficult to prove, so it is understood to be found only in extreme cases, for example where a recall was concealed. In the case of automated driving vehicles, proof will be even more difficult due to factors such as the black-box nature of AI inference, so I believe there will realistically be cases in which no party is criminally responsible, and indeed cases in which no party is at fault in the first place. On the other hand, since in theory the presence or absence of negligence will still be examined in each accident, I believe a certain level of clarification is needed here. As a medium- to long-term issue, I have written in the materials that it may be possible to grant a certain degree of immunity in exchange for information and cooperation. Even if that proves difficult, and although it would be hard to enumerate every case, I believe it would be possible to clarify the cases in which a person would not be held criminally responsible, for example through safe harbors.
Regarding Material B, although it is theoretically possible for various parties other than the manufacturer, such as the person in charge of specified automated operation, to be negligent in various ways, I believe the cases in which they would actually be negligent in relation to operation are extremely limited. As above, since certain clarifications are possible, and since the United Kingdom is also considering a direction in which the user would be exempt from liability, I believe the same approach can be taken.
Next is civil liability. Before turning to the Product Liability Act, first, as written in the upper right corner, with regard to accidents resulting in injury or death, the Ministry of Land, Infrastructure, Transport and Tourism has arranged that accidents up to Level 4 will be resolved through the operator's liability under the Automobile Liability Security Act. I believe we should basically discuss matters on this premise. If so, product liability and the like will be the subject of discussion.
One point about "defects": there are safety standards under the Road Transport Vehicle Act, but in relation to civil liability these are minimum standards for driving on public roads, so satisfying them does not necessarily mean there is no defect. More fundamentally, as some of you have already mentioned, there is the question of whether the current concept of defect fits AI. That is, if an accident occurs even though the system is stochastically safe as a whole, and the accident is one that an ordinary human could have avoided, then from a post-hoc perspective it may be concluded that the system was, in the end, not safer than a human. If the standard is safety equivalent to or higher than that of a careful and competent driver, then looking at the individual accident, the system did not satisfy the standard and may as a result be found defective. Because of this problem, I think we need to sort out and clarify our thinking on the cases in which it can be said that there is no defect.
In addition, as noted in the lower right regarding the EU, a certain level of discussion is under way on product liability. Software is intangible and thus not subject to the Product Liability Act in the first place, but in the EU there are discussions on recognizing product liability for AI-related software and on shifting the burden of proof. I believe we need to discuss whether the current approach of Japanese court cases is sufficient to protect victims.
In addition, as pointed out in Part C of the materials, it is out of step with modern practice that the reference time for determining defects is the time of delivery of the car, even though cars are sold on the premise of being updated. We may address this point by interpretation in the short term, and consider amending the law in the medium to long term. Comparative negligence has basically been handled through an accumulation of past court cases, but I recognize that, as a short- to medium- to long-term issue, the case law will need to be reorganized on the premise of automated driving.
Property damage is outside the scope of the Automobile Liability Security Act, so it is a point of contention. On the other hand, if we assume current automated driving services, the vehicles are basically insured, and I think most cases will be resolved through the discussion of the Product Liability Act, but it remains a point of contention.
On accident investigation, my view is basically the same as Dr. Suda's. I believe, however, that disclosing the results after conducting an investigation with compulsory authority may lead to further safety, while attention must be paid to trade secrets and to the victims. I think such an approach may be possible. The final part, on intersections, is detailed, so I will skip it.
Mr. Kozuka: Thank you very much for your comments. Then, Mr. Takahashi, please give us your opinion.
Takahashi Member: Mr. Sakamaki said earlier that his was a classic discussion, which humbled me, but I would like to speak even more classically. First, please look at page 2 of my opinion paper. Who, in the first place, is the entity that uses science? Human beings, of course. Limited to a single country, it is the citizens. If the public is not convinced, trust in science will be lost, and the public will naturally walk away from science. If we nevertheless pursue the development of science, that is the runaway of science. In the world of justice there is a phrase, the abuse of majority rule; the safety net against it is the rule of law. Conversely, the safety net against judicial recklessness is popular sovereignty. The safety net for automated driving is likewise the sovereignty of the people, in particular, the consensus of the people. So whose consensus is needed? Users, automobile manufacturers, and automobile insurance companies, of course, but those with the strongest interest are the victims who have already been harmed and the future victims yet to come. Money, if lost, can be recovered; a life cannot be taken back. That is why crime victims have the strongest interest. Next page, please. How, then, can we gain the trust of crime victims? For a victim to look forward, or to put an end to the incident, there must be some form of atonement or compensation. Economic compensation is a matter of course, but that is not enough. I was vice chairman of the National Association of Crime Victims until its dissolution, dealing mainly with the restoration of the rights of the bereaved families of murder victims.
From the perspective of the bereaved families in murder cases, they trust the Government of Japan precisely because they believe it will relieve their grief and act on their behalf. What, then, about the victims of traffic accidents? Here again, there are several patterns.
In cases of simple negligence caused by a moment of carelessness, the kind anyone might commit, some people want only an apology. On the other hand, I have seen many victims' families who strongly want criminal responsibility to be taken, and who also want a sentence of actual imprisonment, because to them a suspended sentence is equivalent to an acquittal. In particular, in accidents on the borderline of the crime of dangerous driving resulting in death or injury, which is close to an intentional crime, the story is different again. It is the wish of most bereaved families that punishment be properly imposed and that the offender be sentenced to imprisonment. If the runaway of science proceeds without accepting the feelings of the victims, I think trust in science and the trust of the people will be lost. By the way, in what cases does science go wrong? In 1999 there was a Mars probe, the Mars Climate Orbiter. The probe was well on its way to Mars, but it was lost on arrival. The reason was that NASA worked in metric units, while the contractor that built it worked in imperial (yard-pound) units. As a result, 13.5 billion yen came to nothing. Still, no one died. There are cases in which people have died. There was an airplane accident in 1999: the controller said 1,500 m, but the first officer mistook 1,500 m for 1,500 ft, passed the error to the captain, who made a wrong judgment, and the plane crashed, killing everyone aboard. Such mistakes are naturally foreseeable and, as a result, avoidable. If no one bears responsibility and no one is criminally liable in such a case, will the crime victims be convinced? Of course, there are many kinds of traffic accidents. The person who dies may be 80 or 90 years old.
But the most tragic case is when a small child dies, especially at a crosswalk.
At a time like that, if the accident was caused by a flaw in the system or a very simple mistake by the person who created it, can the victim be convinced by a uniform grant of criminal immunity? I do not think so. If victims can no longer trust the judiciary, the order of society will be disturbed. And that is not all: it injures the dignity of the victim. I think science always goes wrong. To be exact, however, science itself does not make mistakes. As the earlier examples make clear, the science and technology are used correctly, but the theoretical premise is constructed wrongly, or even where the theory is correct, humans make mistakes in setting the conditions or in carrying it out. In other words, it is mostly human error. The same is true of automated driving: it is impossible to anticipate and build in every condition and risk factor from the beginning. I am clearly opposed to blanket immunity from criminal responsibility in such cases. For example, please look at Figure 2 of my written opinion. This is a pedestrian crossing without a traffic light. What does the Road Traffic Act require here? In this scene there are no oncoming vehicles and no pedestrians can be seen crossing. But under the Road Traffic Act, you must reduce speed to a level at which you can stop at the stop line, because there is a hedge on the left side and a small child, Pedestrian A, may be hiding behind it. In such a case you cannot say "it is clear that no one is there." Therefore you must slow down to a speed at which you can stop at the stop line. Does the current technology of automated driving cover all of this? I do not think so. Please look at the next diagram. This one is truly surprising. The driver is traveling and sees a green light; no one slows down here. An accident occurred when a motorcycle came from the right, also without slowing down.
In this case, the Yamaguchi District Public Prosecutors Office charged the driver with involuntary manslaughter, on the grounds that he had an obligation to slow down. Why must one slow down? If you look closely, there is no traffic light on the opposite side of the crossing road. In other words, this traffic light does not control vehicles entering from the intersecting road; it merely controls pedestrians on the crosswalk and the vehicles passing over it. Therefore, this is treated as an intersection without traffic control, and the logic is that you must slow down because visibility is poor. The judge, however, acquitted the driver on the grounds that the accident was neither foreseeable nor avoidable. In response, at the request of the bereaved family of the deceased victim, I filed a lawsuit claiming state compensation, arguing that the accident occurred because there was a defect in the installation of the traffic light. The court found the installation of the traffic light to be defective; at the same time, the driver was found to be 50 percent at fault. Will current automated driving technology be able to cope with such problems? There are, in the first place, many court precedents under the Road Traffic Act, and there are hundreds of thousands of uncontrolled intersections with poor visibility across the country. Taking all of this into account, I do not think automated driving technology can yet handle it. There are only two solutions: either a revision of the Road Traffic Act that overturns all the existing case law, or a dramatic evolution of automated driving technology. But even in the latter case, accidents will happen, because human error cannot be eliminated 100%. Victims trust the judiciary, they trust the courts, they trust the state.
I believe we will never obtain a national consensus if we ignore this background and grant a uniform exemption from criminal liability. That's all.
Mr. Kozuka: Thank you very much for your comments. Then, Dr. Nakahara, please keep it to four minutes as well.
Nakahara Member: My specialty is tort law under the Civil Code, so I will comment on matters pertaining to it. The issues are classified on page 32 of the secretariat's materials, but I think they are slightly biased toward medium- to long-term issues, so I would like to classify them from my own perspective.
As a short-term issue, we will of course sort out how accidents caused by automated driving vehicles are to be resolved under current law, but I believe that considering legislative measures where necessary should also be treated as a short-term issue. Not only the Automobile Liability Security Act but also the Product Liability Act should be examined. First, I do not think there is any problem in applying the Automobile Liability Security Act to automated driving vehicles. Problems of interpretation can arise, especially around the concept of the operator, but in any case a responsible party can be identified. The problem is rather this: the operator's exemption is recognized only under the very strict restriction of the three requirements, so in substance the Act makes the operator the primary responsible party. The resulting scheme is that the operator, in practice the insurance company, claims against the manufacturer and others if the vehicle has a defect. But is that acceptable for automated driving vehicles, particularly fully automated ones? Even if the operator makes a claim, the defect is difficult to prove and the claim may fail. Moreover, since it is difficult to prove the existence of a defect, the operator's exemption requirements are not satisfied either, so the operator will likely bear the final burden. This is undesirable, because the operator may not always have full control over the risk of an accident, and it contributes nothing to investigating the cause. If this situation is left unattended, such problems will arise sooner or later.
There are various options, such as amending the Automobile Liability Security Act itself, restricting its application so as to regulate its relationship with the Product Liability Act, or enacting a separate law, and I think such discussions are still needed.
On the other hand, under current law the liability of manufacturers and others is pursued under the Product Liability Act, and there is no problem with its applicability itself: even if software alone does not fall under the category of a product, an automated driving vehicle, as a movable incorporating the software, does. However, as stated in the assumed issues in the secretariat materials, I believe it is certainly necessary to respond to updates as soon as possible. Under current law, the reference time for determining whether a defect exists, and for the development-risk defense, is the time of delivery of the product, but if the product is designed to be updated, the question is whether the determination should instead be based on its state after the update. Various other problems can also be expected: whether liability should be imposed on the software provider and the developer of the overall system, whether the burden of proving defects and causation should be eased in light of the product's technical and scientific complexity, and how defects in such a product should be determined in the first place. On the second point, easing the burden of proof, the recent EU response may offer an important model, but this is a problem that has long been pointed out in Japan without being resolved. What worries me more is the determination of defects itself: if, as has been pointed out, at least a significant part of automated driving will be performed by AI systems, how are the algorithms to be compared? Would an earlier system always be judged defective by comparison with later, self-learning AI systems? If judgments are not to be made by comparison, then the question of how they are to be made is, I would say, a short-term issue.
As mentioned in the expected issues, as a medium- to long-term matter, the question of automated driving vehicles seems significant as a starting point for studying civil liability in the age of AI. The product liability issues I mentioned earlier are not limited to automated driving. I hope this study meeting will be an opportunity to deepen awareness of AI and civil liability issues in general.
For now, I would point out that there are various problems between AI and civil liability, and the problem of automated driving vehicles is only one type, in particular the type in which damage is caused directly by the operation of the AI system itself. In such a type, a framework of no-fault liability is easy to accept even from the perspective of conventional tort law, and indeed the automated driving question is framed as one of how to arrange the operator's liability under the Automobile Liability Security Act and liability for defects under the Product Liability Act. On the other hand, in cases where humans rely on the output of an AI system, for example where doctors use an AI system to make decisions, liability for negligence will continue to handle matters to a considerable extent, so at the least the present discussion cannot easily be generalized.
There may also be an argument about whether the no-fault liability applied to automated driving vehicles is the same as what has been discussed so far. For example, the operator's liability under the Automobile Liability Security Act has been explained in doctrine as a liability for danger, based on the special danger posed by automobiles, but in the long run automated driving vehicles may considerably improve safety. Under such circumstances, the main purpose of arriving at no-fault liability may be seen as responding to the risk that no one can be held responsible, by establishing an objective ground of liability in the form of the accident itself, free from the normative requirements of negligence or defect. In addition, if we emphasize a mechanism in which various parties share the damage through a compulsory liability insurance system, with compensation through a public system where there is no responsible party, then no-fault liability in this context is merely a conceptual tool for damage compensation, special danger is not essential, and such a damage compensation mechanism might be conceivable for AI systems in general. This is only one extreme view, and I do not support it. There are also various design questions, such as the need to incorporate a recourse mechanism, but in any case I would like to consider how this issue extends to AI and civil liability in general, while keeping the automated driving question in view. That's all.
Mr. Kozuka: Thank you very much for your comments. If all of you speak in about 4 minutes, I think we will finish at 6 p.m., but I hear there is much you wish to say, and the secretariat has informed me that the meeting may be extended by up to 30 minutes. In that case, we will consider hearing first from those who have prior engagements and need to leave. If you need to leave early, could you click the raise-hand button? In the meantime, I would like to ask Dr. Nishinari to speak.
Dr. Nishinari: My position is probably quite different from yours. I study mathematics and physics, and from that perspective I study transportation, logistics, and the flow of people. There are various views, but for the time being I believe automated driving will be feasible only on expressways and dedicated roads. Since people do not act within a predictable range, I personally believe it will be quite difficult to realize in environments mixed with people. In that sense, I am also participating in the SIP logistics project, and I think the most feasible application is the efficient transport of goods in a formation with a human-driven lead vehicle followed by unmanned vehicles. I believe the fastest route to practical use is to have one person drive and pull about five vehicles. It is necessary to select use cases, including how to think about accidents in such configurations. We also need to consider technical problems, such as who is responsible if GPS or communications fail while driving. In addition, the trolley problem is famous, and in Germany it is currently prohibited to program a system to weigh lives against each other. How should such trolley problems be handled in Japan? As AI has been mentioned, I think we are entering an age in which systems can think and judge by themselves. I also use AI in my research, and ChatGPT has exceeded researchers' expectations. How to handle these things is a difficult question. And, as I mentioned to the secretariat, my main point is to share accident data with everyone. To do that, the format must be standardized; if each company produces data in a different format, analysis is impossible. This problem is occurring right now in the logistics industry: sharing logistics data is said to be desirable, but the data of the various companies does not match.
I believe the most important thing is to establish a system of governance by determining a standard format, making it mandatory, and sharing the data at a body such as a safety committee. In closing, I would like to recall the Apollo program. Some people died along the way, but the program went ahead. I think the reason it could was public understanding: the mission and the dream were shared with the people. Social acceptance will not emerge unless we share an understanding of what automated driving is for. If there is a shared recognition that automated driving is genuinely necessary, whether for regional logistics and regional transportation or for reducing accidents, I think a culture will develop in which everyone accepts a certain amount of trouble and takes on the risks. Sharing the purpose is, I believe, the most important thing. That's all.
Mr. Kozuka: Thank you very much, Dr. Nishinari. I see that no one has raised their hand to speak early at the moment. If you do need to leave early, please raise your hand as we go. Following the order of the member list, may I ask Mr. Hatano of the Japan Automobile Manufacturers Association?
Hatano Member: I am Hatano of the Japan Automobile Manufacturers Association, and I would like to comment on behalf of the association. At the study group on the future mobility road map held by the Digital Agency on July 12 this year, JAMA presented the issues the industry recognizes and its efforts toward realization, while following the work of the three ministries and agencies toward automated driving Level 4. Introducing all of that content today would take too much time, so I have organized today's materials as an executive summary of what we expect from this sub-working group. Looking at the whole picture, and though time is short, there are three main things we would like to share. The first is the concept of realizing services based on the three-part ("trinity") safety measures shown in the upper left. If we regard automated driving as a technology prepared by people, it is not necessarily perfect at this point. Then, within the range of services we want to provide, the figure on the left shows the safety to be ensured and the responsibility to be borne within the automated service area. Although the coloring makes it a little hard to read, even after clarifying the service area, the operating conditions, and their range, the range in which the automated driving system can actually operate independently, comply with the rules, and ensure safety will not necessarily match the service area. In that case, the missing parts will have to be covered by infrastructure development and cooperative systems. To guarantee the public nature of that infrastructure, as mentioned in the earlier discussion, we need to consider its importance, including the reliability of traffic signals and the issue of road maintenance.
In addition, if a cooperative system is used, its functional allocation and the allocation of responsibility must be clarified; only then can the missing parts be covered. On the other hand, if we consider that traffic environments are established not by automated driving alone but through sharing and coexistence with other traffic participants, there is a view that if all traffic participants, not only those in the immediate vicinity, firmly observe the rules, safety can be expected to be ensured as a whole. Rule compliance by bicycles has been much discussed recently, and in fact many traffic accidents could have been avoided if pedestrians and cyclists had complied with the rules. In other words, as automated driving vehicles increasingly mix with conventional vehicles, I would like to raise the question of how responsibility in the event of accidents should be allocated according to the roles each party is expected to play. Based on this concept, how safety is actually secured is shown in the upper row on the right: revision of the laws for automated driving has already been carried out over a long period through public-private cooperation on various points. This refers to the essential safety requirements for automated driving from the Safety Technology Guidelines for automated driving issued by the Ministry of Land, Infrastructure, Transport and Tourism. They are organized around the broad idea that reasonably foreseeable and preventable accidents resulting in injury or death should not occur. Within the service area shown in the figure on the left, safety can be located in roughly four quadrants. The usual case, a foreseeable and preventable accident, is one that automated driving is programmed to avoid, so it does not happen. As the industry expands this range as far as possible, expanding the foreseeable range and clarifying the required avoidance performance will be of paramount importance.
Therefore, what we expect from the sub-working group is to make the set of reasonably foreseeable accident cases finite while maximizing it as much as possible. From an industrial perspective, we want to avoid the outcome that nothing gets produced because foreseeability is pursued through endless, open-ended consideration. In addition, if accidents within that finite range can be avoided, then by clarifying and sharing the criteria for the performance required, we can establish the range in which accidents do not occur. However, there are cases in which even an actually foreseeable accident is difficult to avoid despite there being no defect or negligence. For example, an accident can occur with no fault or defect on the automated driving side when a person jumps out in front of the vehicle in a way that is physically impossible to avoid. For such cases there remains the possibility of treating social acceptability quantitatively, so I believe it is important to consider a standard of social acceptability for cases that are difficult to avoid and involve no defect or negligence. Furthermore, there are of course situations that are difficult to foresee, where we do not know what will happen, and I think it is necessary to discuss how to handle such cases in the future. Such discussions will require sufficient time and careful deliberation to reach a conclusion, so I would like them to be conducted together with industry. For reference, the lower left shows various examples of incidents that are difficult to avoid even without defects or faults. For example, even within the range where safety and responsibility must be secured, as in Part B, an incident may turn out to be the fault of a third party, for instance through a cybersecurity attack.
In such a case it may be difficult to determine who is actually responsible, and responsibility may fall in various ways on the provider, the approver, the user, or the operator, so I would like to ask for careful discussion. That's all from me.
Mr. Kozuka: Thank you very much. Next, Mr. Harada, please keep to about four minutes.
Harada Member: I'm Harada from Kyoto University. I specialize in administrative law, so I will speak only to issues related to administrative law. First, regarding safety standards under the Road Transport Vehicle Act, I think that, as in other fields, we can consider building a framework based on performance-based regulation, which Professor Inadani mentioned earlier, together with the use of private-sector technical standards and third-party certification. Effectiveness could then be ensured by linking the framework to civil liability or insurance systems. From the viewpoint of administrative law, the very difficult question is what to do with the Road Traffic Act. In the short term, the big question will be whether to maintain the currently legislated permission system for specified automated operation, or to create a separate legal system premised on individuals owning automobiles. In addition, among the current requirements for the specified automated operation permission there are requirements somewhat atypical for the Road Traffic Act, such as the convenience of local residents, and the question of how to treat these remains. Next, the medium-to-long-term issues are much more difficult. The current Road Traffic Act basically imposes obligations on the driver, and those obligations are closely linked to criminal liability. This is a structure that ordinary administrative laws do not have, and if there is no driver in automated driving, the question arises of how the system should be reformed.
For example, as in the cases of nuclear damage compensation and artificial satellites, one could consider a scheme in which operation is permitted on condition of damage-security measures, or a scheme that strengthens the administrative responsibility of the manufacturer or the system designer and establishes a mechanism for administrative investigation and adjudication by a highly independent administrative organization. However, the question of how far the current Road Traffic Act mechanism centered on criminal liability needs to change is not only an administrative-law issue but one requiring coordination with criminal law. Also, in relation to accidents, I think it is necessary to sort out the relationship between liability under Article 2 of the State Redress Act, which was discussed earlier, and product liability. In addition, there is the extremely difficult general question of how to position automated driving within an administrative law system and administrative procedures premised on digital technology and AI, but I think this too will be a long-term issue. That's all from me.
Mr. Kozuka: Thank you very much. Next, may I ask Team Leader Yokota of the General Insurance Association of Japan?
Counsellor Hitoshi Suga: Could you please move on to the next speaker?
Mr. Kozuka: It seems there is an equipment problem, so we will move on. Mr. Yoshikai, could you please?
Yoshikai Member: I was formerly a prosecutor and have practical experience in investigation and prosecution, so from that perspective I would like to state my personal views on the short-term and medium-to-long-term issues concerning criminal liability. First, as to short-term issues and immediate measures: as was mentioned earlier, if the police and prosecutors, who hold the power of compulsory investigation, were to conduct no investigation at all when the spread of automated driving vehicles leads to serious outcomes such as the death of victims, then not only would the victims be unconvinced, but it would be impossible to respond to public calls for clarification of the truth, and, on the contrary, I believe it would undermine the social acceptance of automated driving vehicles. In Japan, understanding of the plea bargaining and immunity arrangements introduced in some foreign countries has not yet sufficiently formed, so for the time being it is necessary to preserve the possibility of investigation and prosecution in all cases. However, punishment is considered a measure of last resort, and the pursuit of criminal responsibility should be restrained. If the victim can be relieved by civil or administrative measures, there may be cases in which it is unnecessary to pursue criminal responsibility; Japan's cautious approach to prosecution can be said to rest on this way of thinking. In addition, pursuing criminal liability requires a high standard of proof of the facts, so prosecutors are expected to prosecute only when there is a high probability of conviction based on solid evidence, and this applies equally to traffic accidents.
In accidents caused by Level 3 or Level 4 automated driving vehicles, where the person on board is not expected to drive directly, it is anticipated that, unlike conventional traffic-accident investigations, a prosecution decision cannot be made without a wide-ranging investigation, including into whether there was negligence in the design, on the production line, and so on. Such cases are called special negligence cases, and even now the bar to prosecution in them is generally high. In special negligence cases, prosecution is limited to those where the judgment or conduct clearly deviated from the general safety level of the industry. In that respect, if guidelines on the safety level of automated driving vehicles were created and observed, there would be no need to be intimidated by the risk of criminal liability for accidents caused by automated driving vehicles. As for clarifying the cause of an accident and considering measures to prevent recurrence, there are limits to disclosure of the investigative authorities' findings, because case records are in principle not disclosed under the Code of Criminal Procedure. We believe the publication of findings on the cause of an incident should instead be carried out under the authority of an investigation committee specializing in automated driving vehicles. Next, as to medium-to-long-term issues: as has already been pointed out, in special negligence cases the question is whether the offense of professional negligence causing death or injury can be established against an individual, but the Penal Code has no dual-punishment provision, so the pursuit of responsibility on the corporate side may be insufficient.
It is possible to consider a mechanism for pursuing the criminal liability of the company itself, drawing on past discussions of so-called corporate punishment, including the introduction of provisions to punish corporations directly. In addition, because of the difficulty of special negligence cases, it is expected that there will be investigations into accidents caused by automated driving vehicles that do not result in prosecution, and prosecutions that end in acquittal. Alongside the pursuit of criminal liability, it is therefore necessary to consider enhancing systems that compensate victims at the expense of the national government, though this problem is not limited to accidents caused by automated driving vehicles. Earlier I spoke of the industry's general level, but there is a concern that pegging liability to that level would have a chilling effect on attempts at new technologies that exceed it. I therefore think it is necessary to conduct studies in advance that allow such safety to be evaluated as legally reasonable. To this end, building a system of cooperation between the technical domain and the legal domain will also be an issue. That's all for my opinion.
Mr. Kozuka: Thank you very much.
Mr. Goto, thank you for waiting. Please take about four minutes.
Goto Member: I apologize for not having prepared any materials, but I would like to make a few points.
First, a question. On page 3, the purpose of this discussion is stated as fully providing relief to victims and promoting responsible social implementation of automated driving vehicles using cutting-edge technologies. Victim relief is uncontroversial, but I believe the image of "responsible social implementation" differs from person to person. Since I specialize in civil law, what comes to mind alongside victim relief is the prevention of accidents and of the occurrence of damage. Even if preventing 100% of accidents is difficult, the socially relevant question is how to realize optimal prevention in light of costs and the like. As means to that end, not only civil liability but also criminal liability will play a part, and administrative safety regulation will also be involved; this time there is also talk of administrative dispositions. There is the question of what the content of the safety regulation should be, which I take to be largely a technical problem, and, in relation to how that safety regulation is structured, there is the question of how to think about civil and criminal liability. On this point, I am somewhat unsure how to interpret the phrase "promotion of responsible social implementation." I believe this is a long-term issue, but if we do not consciously discuss it from as early a stage as possible, it may become an obstacle partway through, so I would like to ask how the secretariat and the other members are thinking about it. In addition, from a different perspective, something that has come up in the discussions so far is the victim's sense of retribution with regard to criminal punishment.
The sense of retribution is rarely discussed in civil liability, but I think it matters for the significance of criminal punishment. As a non-specialist, though, I find it quite difficult to understand in which cases this sense of retribution is important. From what I have heard today, Mr. Takahashi mentioned earlier that the victim's sense of retribution is strong in cases close to intentional conduct, for example drunk driving or dangerous driving, but not so strong in other cases. In what are really minor accidents, it may have been the fear of criminal punishment being imposed even at that level that has motivated those who have argued for immunity from prosecution. If so, the debate over whether to impose criminal punishment will differ depending on which situation one has in mind, and I thought that unless we assume concrete situations here, there is a risk the debate will not hold together.
In addition, from the perspective of social implementation, as Professor Nishinari said, it is very reasonable to emphasize and confirm what society is trying to gain by advancing this technology. The purpose of development, in particular the prevention of accidents caused by human error, is important: people who would have been injured had humans continued to drive will no longer become victims. Although such benefits are difficult for lawyers to see, I believe they should be recognized from the outset. That is my first point.
The other point concerns what kind of automated driving is assumed this time. There was discussion of fully driverless operation of privately owned cars, but I have heard that its realization may be a long way off. Business use is also depicted alongside owner cars, and I think some of it, for example transport services operated or monitored remotely, will probably be realized first, so we should not ignore it in our discussion. We should also consider how remote monitoring will be positioned once Level 5 arrives. I feel we cannot start by discussing only what will come in the distant future, so we need to divide the subject into a few cases and discuss more realistic use cases. That is all.
Mr. Kozuka: Thank you very much. Then, Mr. Fujita, who has also been waiting, may I ask you to speak?
Fujita Member: I'm sorry to have joined partway through. There are various matters to discuss in detail, but since the time given is very short, four minutes, I will comment on only one assumed issue, on slide 32. I have not heard the secretariat's report, so I apologize for any duplication, but I think it is very good to consider civil liability, administrative liability, criminal liability, and accident investigation comprehensively, and to consider civil liability together with product liability, operator liability, and tort liability. Regarding automated driving, I had feared that legal regulation would be built up piecemeal without a comprehensive landing point, so I think it is very good that a forum for comprehensive consideration has been established. On that basis, as shown on slide 32, I think it is realistic to discuss short-term issues and medium-to-long-term issues separately. However, while several individual short-term issues have been raised, I think it would be better to discuss explicitly, from an early stage, not only such piecemeal measures but also the basic perspectives and goals of institutional design for automated driving. Whether or not it can be realized in the short term, the basic perspective should be discussed early, as a matter spanning the short and long term. Specifically, it is the basic concept of the safety required when operation is controlled by a system without the involvement of a human driver: the question of whether civil and criminal liability should be imposed on those involved when an accident is caused by an automated driving vehicle whose system can, from an ex ante perspective, greatly reduce the probability of accidents and offer far safer operation than humans.
If the vehicle's control at the time of a specific accident is evaluated individually and after the fact, then control that would be judged negligent had a human driver driven that way will naturally give rise to civil and criminal liability (the responsible parties may be the driver, the operator, and the manufacturer). By contrast, I believe the safety of the system in relation to the safety standards under the Road Transport Vehicle Act must, by the nature of approval, be judged from the ex ante perspective of the probability of preventing accidents. The question I would like to raise is whether that same perspective can be adopted for ex post civil and criminal liability, and if so to what extent, and whether in civil cases, criminal cases, or both. Cases involving program updates, hardware maintenance, and human-driver involvement outside the ODD can be treated as human error, so let us set them aside. If we limit ourselves purely to evaluating the safety of the program's design, and if the ex ante probabilistic improvement of safety can be regarded as the decisive factor for the allocation of civil and criminal responsibility, then we can draw a scenario in which civil, criminal, and administrative responsibility are integrated around a common core of basic principles while retaining differences in individual requirements. In that case, the path would be to take various steps individually, starting from what can be done, based on this direction. On the other hand, this way of thinking may not be acceptable: one may take the view that if an accident could have been prevented by a human, then any failure to prevent it must be treated as a defect.
That is an entirely different story, in which administrative responsibility and ex post civil and criminal liability rest on completely different principles. In that case, those different principles need to be spelled out concretely.
In any case, even if the final institutional solution can only be realized as a medium-to-long-term issue, I think it is better to present early, as an explicit issue, the question of whether the idea that ex ante probabilistic safety is the decisive factor for civil and criminal responsibility can be accepted. And if possible, it is better to indicate a certain direction, even a rough one that does not go into specifics. This is the foundation of the institutional design, but at the same time it is a policy decision that, depending on its content and conclusion, will take time to be accepted by society, or risks not being accepted at all. It would be undesirable for everything to be overturned after extensive and detailed discussions have accumulated, so I think it is better to settle the direction in advance to some extent. That's all from me.
Mr. Kozuka: Thank you very much. Due to my poor time management we are already over time, but is Mr. Yokota of the General Insurance Association of Japan now in a position to speak?
Yokota Member: This is Yokota of the General Insurance Association of Japan. The points at issue have mostly been covered in the statements made so far, so I will speak briefly about current efforts and issues from the perspective of the non-life insurance industry. As material, the picture on page 22 of the secretariat's material is the most apt. As its lead sentence states, because the degree of involvement of the operator or driver in driving decreases with automated driving, accidents caused by factors other than operator or driver error may increase. It is therefore expected to take considerable time to determine the cause of an accident and which party is responsible. Given the social and public nature of insurance, the first priority is to provide relief to the victim before investigating the cause of the accident, so we believe it is important to maintain a mechanism under which victims can promptly receive medical expenses, consolation money, repair costs, and so on. First, regarding compulsory automobile liability insurance, the policy is to maintain the conventional concept of operator liability. Regarding voluntary automobile insurance, most insurance companies already offer riders under which, even if the insured bears no legal obligation to compensate, insurance proceeds are paid promptly to victims first. In that sense, insurance companies have already taken measures to respond to accidents involving automated driving. On the other hand, after paying the claim, the insurance company acquires the claim by subrogation and pursues the truly responsible party on behalf of the insured, which I believe corresponds to the lower part of the figure, and on this point there are several issues. I would like to mention three main ones.
The first is that there are many players. A wide variety of parties are expected to be involved: vehicle and component manufacturers, software business operators, and, on the human side, remote monitoring personnel and specified automated operation supervisors, for example. I believe the scope of the safety assurance and the duty of care required of each of these parties is not necessarily clear at present. It is still difficult to identify the party responsible for a given accident, so it would be helpful if the law were developed to make that possible. The second is that in Level 4 automated driving vehicles, the people in the car are assumed not to understand the situation at the time of an incident, so investigating the cause requires functions and means to grasp the operating situation on behalf of the parties, for example driving data from the automated driving system; a mechanism for sharing such useful data is therefore important. The third is that we have so far been talking about personal injury, but in property damage accidents there are cases where no applicable insurance is in place, and the victim must then directly pursue product liability or tort liability. Proof will be quite difficult in such cases, so what kind of system can be devised will be an issue. I would like to examine these points through the sub-working group. Thank you very much. That's all from me.
Mr. Kozuka: Thank you very much. All the members have now made their comments.
A wide range of issues was raised this time, so I would first like the secretariat to organize them. In doing so, please distinguish whether a given point is a difference in approach, for example a classical approach versus very new agile governance, or a difference in the safety required and the objectives society should pursue, as Dr. Fujita noted. I would like to make just two quick comments myself. One concerns data sharing: as many of you said today, it is important for investigating the cause of accidents, but data also includes trade secrets and the like, and there is debate over who owns it. I would point out that it is important to share data at some stage, even if only voluntarily at first. Second, as was touched on in the secretariat's explanation, development in automated driving currently combines rule-based elements to some extent, but there is also the technical approach of relying entirely on machine learning, and the institutional framework may need to differ according to such differences in technical approach, rather than according to the differences among Levels 3, 4, and 5. Please keep this in mind. I had actually planned a free discussion, but since we are past time, I will omit it and close today's meeting. Finally, the secretariat has some announcements. Director-General Hasui, may I ask you to proceed?
Mr. Hasui: Thank you very much for your many opinions today. If you would like to offer additional opinions, please email the secretariat by the end of this week. We plan to publish the materials from this meeting on the Digital Agency website by the end of this month at the latest; if you have no objection, we would like to publish the materials received from each member as well. If you have any questions, please contact us individually. We also plan to publish the minutes on the Digital Agency website after confirming their content with the members. The next sub-working group meeting is expected to be held in January; we will contact you later about the schedule. With that, I would like to conclude today's first meeting of the Sub-Working Group on the Study of Social Rules for automated driving vehicles in the age of AI. Thank you very much for your attention today.