

Digital Legal System Working Group (1st) of the Digital Related System Reform Study Meeting

Overview

  • Date and Time: Friday, October 27, 2023 from 2:00 pm to 4:00 pm
  • Location: Online
  • Agenda:
    1. Opening
    2. Proceedings
      1. Progress of the survey and demonstration project on the digitalization of legal affairs and the development and utilization of law data
      2. Research and perspectives on AI and law
      3. Results of experiments on legal affairs assistance using AI, etc.
      4. Questions and answers and exchange of opinions
    3. Adjournment

Materials

Minutes

Secretariat (Nakano): I would now like to open the first meeting of the Digital Legal System Working Group.
Following the reorganization and expansion of the Digital Extraordinary Administrative Advisory Committee on October 6, this working group takes over the discussions of the Review Team on the Digitalization of Legal Affairs. It will discuss the construction of processes and systems to confirm that new laws and regulations conform to the digital principles, the digitalization of legal affairs, the development of the base registry of law data, and the promotion of the utilization of law data.
The guidelines for holding and operating this working group, and its membership, are as distributed in Materials 1 to 3, so I will omit the explanation today.
Members are participating online today. Member Horiguchi is absent for personal reasons.
Next, before we begin the proceedings, Mr. Tomiyasu, Director-General of the Strategy and Organization Group of the Digital Agency, who will newly serve as the chief of this working group, will give an address.
Mr. Tomiyasu, please.

Director-General Tomiyasu: My name is Tomiyasu, Director-General of the Strategy and Organization Group at the Digital Agency. I will serve as the chief of this working group. Thank you for your cooperation.
As the Secretariat has just explained, the Digital Administrative and Fiscal Reform Council was established on October 6 through the reorganization and expansion of the Digital Extraordinary Administrative Advisory Committee. The Council has instructed the Digital Agency to continue considering the digitalization of legal affairs, and we would like to ask you to continue those deliberations here.
Accordingly, the structure changes from the Review Team on the Digitalization of Legal Affairs to the Digital Legal System Working Group, and we look forward to your continued support.
The Review Team held a total of eight meetings, and I thank you very much for your great support of that work as well.
At today's meeting, we will first hear a report on the status of the survey and demonstration project that Daiichi Hoki Co., Ltd. and FRAIM Co., Ltd. have been entrusted with as a Digital Agency project since April this year. Next, Member Tsunoda and Associate Professor Kano will talk about research and prospects on AI and law. Finally, we will explain the legal affairs support experiments using AI that are being conducted by Digital Agency staff.
There is a lot of content, but I look forward to a lively discussion today as well. Thank you very much.

Secretariat (Nakano): Thank you very much.
Today's agenda is as I sent you in advance.
I would like to move on to Item 1. I would appreciate it if Daiichi Hoki Co., Ltd. and FRAIM Co., Ltd. could explain the progress of the "Survey and Demonstration on the Digitalization of Legal Affairs and the Development and Utilization of Law Data" in about 20 minutes.
Mr. Umemaki of Daiichi Hoki Co., Ltd. and Mr. Miyasaka of FRAIM Co., Ltd., please.

Daiichi Hoki Co., Ltd. (Mr. Umemaki): Thank you for this opportunity.
My name is Umemaki from Daiichi Hoki Co., Ltd.
Since time is limited, I will explain using Material 4, which you have at hand.
First, please take a look at page 3, which shows the overall picture of this report. Items 1 to 6 are in bold type, and I will focus mainly on those items.
Please move on to page 4. I will start with an overview of the business analysis of legal affairs. Regarding the digitalization of legal affairs, we have been conducting hearings on the actual business flow of legal affairs, with the cooperation of the Digital Agency and the ministries and agencies, in order to grasp and analyze the inefficiencies and burdens in the planning of current laws, and to present measures to resolve the issues in legal affairs and methods considered useful for digitalization. As a future plan, we intend to measure the time required for the tasks identified in the hearings and proceed with quantitative measurement and estimation.
The outline and purpose of the hearings are shown below. The five laws listed in (3) were the actual subjects of the hearings. As noted in the asterisk, they were selected on the assumption that the business flow would differ depending on the form of the law. Returning to (2), for each bill we basically conducted three hearings: with the person overseeing the drafting, with the person in charge of planning, to hear the details of the work and the sense of burden in depth, and with the person in charge of examination, to hear the checking perspective of the General Affairs Division of the Secretariat.
Next, please see page 5. Here I have listed six issues in legal affairs that have been clarified so far through the comments made in the hearings by the persons in charge, based on their actual experience.
First, because the relevant materials prepared in legal affairs, such as draft articles, must be written in a strict vertical format, it was pointed out that mastering the editing tools for the "five-piece set" of documents prepared for examination by the Cabinet Legislation Bureau and for Cabinet requests is quite difficult and takes time.
The second is an issue concerning the pre-amendment provisions, that is, the "old" side of the old and new comparison table. The "old" state depends on the enforcement date, and the enforcement date can change in the course of consideration, so a correction is required every time it changes.
Third, because precedents must be checked every time the text is revised, time is spent searching for appropriate precedents.
Fourth, there is the effort involved in inspecting the materials. Whether the pre-amendment provisions and the contents of the old and new comparison table are consistent is checked in multiple layers, with particular emphasis. One of the inspection methods used so far, reading the texts aloud against each other, has required considerable personnel and time depending on the volume of the bill.
Fifth, in the preliminary examination by the Cabinet Legislation Bureau, the volume of materials to be submitted is large, and preparing and printing them takes time.
Sixth, preparing the reference materials on a bill for submission to the Diet takes time, and we could sense the magnitude of that burden.
Next, page 6. Prior to the hearings, we created a business-flow diagram to grasp the overall flow of legal affairs, from consideration of a bill to its submission to the Diet. This is an excerpt; using such materials, we conducted the hearings while helping the persons in charge recall their past work.
That's all for the overview of business analysis.
Mr. Miyasaka of FRAIM Co., Ltd., could you take over from here?

FRAIM Co., Ltd. (Mr. Miyasaka): This is Miyasaka from FRAIM. Nice to meet you.
From here, I would like to explain about the system.
First, on page 7, I will give an overview of the prototyping and user testing of the editor for legal affairs. As described in the box at the top, the concept is an editor system that can directly edit the post-amendment text data, comprising an editing function, automatic generation of old and new comparison tables and revision texts, and a consistency check function. We conducted user testing of the proposed functions and screen images for these parts, and confirmed and collected opinions on points that need further consideration. Below is an overview of the user testing, which took place at the end of September; I will omit the details.
Pages 8 to 17 describe the screen configurations actually used in the user testing and the functions that were tested. Explaining all of this would be difficult given the time constraints, so I will skip ahead to the general review of the user testing.
Please turn to page 18. Summarizing the results of the first round of user testing: the testing used both the existing systems owned by Daiichi Hoki and FRAIM and the prototype being built in this PoC. The features of the existing systems that were particularly well received were the check functions, above all the citation checks against other laws and the law itself and the statutory-term check, as well as the automatic generation of revision texts and the output of old and new comparison tables. The point of the citation check against other laws is that, for example, when an article is moved down, any other law that cites it is affected. This is called a "hane" (knock-on) amendment, and the system detects such cases automatically.
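A toy sketch may help illustrate the kind of knock-on ("hane") amendment check described here. This is not the vendors' implementation: the citation format ("LawA Article 3") and the function name are invented for the sketch, which simply scans other laws for citations of articles that a renumbering amendment has moved.

```python
import re

def find_knockon_candidates(law_texts, law_id, renumbered):
    """Given a mapping {old_article: new_article} produced by a
    renumbering amendment to `law_id`, return citations in *other*
    laws that reference the renumbered articles and may therefore
    need a knock-on ("hane") amendment."""
    hits = []
    pattern = re.compile(rf"{re.escape(law_id)} Article (\d+)")
    for other_id, text in law_texts.items():
        if other_id == law_id:
            continue  # only cross-law citations matter here
        for m in pattern.finditer(text):
            article = int(m.group(1))
            if article in renumbered:
                hits.append((other_id, article, renumbered[article]))
    return hits

laws = {
    "LawA": "Article 2 ... Article 3 ...",
    "LawB": "As provided in LawA Article 3, ...",
    "LawC": "Notwithstanding LawA Article 9, ...",
}
# An amendment moves LawA Article 3 down to Article 4.
print(find_knockon_candidates(laws, "LawA", {3: 4}))
```

In practice such a check would operate on structured law XML and stable article identifiers rather than plain-text pattern matching, but the shape of the problem is the same.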
Second, among the general comments, we received feedback asking us to cover a revision pattern not included in the testing scenarios, namely a partial revision of a Partial Revision Act.
After this, as a summary of the results, the check functions are described at the bottom of the page, and the representative functions and the implementation summary for each are described on the following page. Due to time limitations I will omit this explanation, so please refer to it as necessary.
Next, please move to page 20. From here I will pick up three of the issues being considered in this PoC, mainly from the editor's perspective. Explaining them in detail would take very long, so I will outline only what the issues are.
The first is how to respond when the text being edited, that is, the "old" side of the old and new comparison table, changes midway. For example, the enforcement date of an amending law assumed during the planning process may change, or slip by six months, and as a result the "old" text or its version may change; the content of the text being edited may also change due to the revision of another law, a change of enforcement date, or the finalization of a revision. How the editor system should handle a change to the "old" text while revision work is in progress is the first issue.
Next, page 21. The second issue concerns the possibility that future amendments, that is, amending laws that remain unenforced, fail to be incorporated, as distinct from the amendment currently being drafted. The examination items include how to detect possible non-incorporation, whether an incorporation that has been made is necessarily correct, whether the form of the article is correct, and what kind of UI should present such information to users.
Please see page 22, the last page of the editor section. This concerns cases where it is necessary to edit the post-amendment text of an amending law prior to promulgation. Let me explain the premise briefly. As many of you may know, the consolidated texts currently published on e-LAWS and e-Gov are based on promulgated amending laws: they are prepared by the Judicial System Department of the Ministry of Justice, which incorporates the amendments, and are then confirmed by each ministry. This process is assumed to ensure accurate law data. However, we assume cases where text that has not gone through this process, for example text before submission to the Diet, is set as the "old" and "new" and revision work is performed on it in the editor. We would therefore like to investigate through future hearings, including on the business flow, how to manage the accuracy of texts without this process. This is the third item to be examined.
That was a lot of material, but this concludes the explanation of the editor.
Next, page 23. Changing perspective, this is an outline of the efforts to expand the functions of the public API for law data; here the envisioned users are not central government agencies but private-sector companies. As shown in the first item in the box at the top, the functions of the public law API are being expanded, mainly by considering the time-series handling of law data. The targets are private-sector users such as legal tech companies and law firms, and we are conducting user surveys while prototyping. As the second item notes, public testing using the prototype is currently underway, and a follow-on event, a hackathon, is being prepared for mid-November. Finally, the public law API is described using OpenAPI, and we have received mostly favorable opinions from users about this.
OpenAPI is explained on the next page, but I will omit the details and briefly touch on only the second and third of the three points listed there. Regarding the advantages for users, there is a page where you can actually try running the API, rather than just reading the API specification. In addition, an SDK that improves development efficiency when users build services on the public law API can be prepared easily. By providing the API in this way, we hope to lower the threshold for users and stimulate the utilization of law data.
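As a rough illustration of the SDK idea, the following sketch shows how a thin client might wrap the raw HTTP API. The endpoint path, parameter names, and response fields here are invented for illustration and are not the actual public law API specification; a stub transport keeps the sketch runnable offline.

```python
import json
from urllib.parse import urlencode

class LawApiClient:
    """Minimal sketch of the kind of SDK wrapper described above.
    The /laws path, lawId/asof parameters, and response shape are
    illustrative assumptions, not the real API."""

    def __init__(self, base_url, transport):
        self.base_url = base_url.rstrip("/")
        self.transport = transport  # callable: url -> JSON string

    def get_law(self, law_id, as_of=None):
        query = {"lawId": law_id}
        if as_of:
            query["asof"] = as_of  # point-in-time (time-series) lookup
        url = f"{self.base_url}/laws?{urlencode(query)}"
        return json.loads(self.transport(url))

# Stub transport so the sketch runs without a network.
def fake_transport(url):
    return json.dumps({"requested": url, "lawTitle": "Sample Act"})

client = LawApiClient("https://api.example.go.jp", fake_transport)
resp = client.get_law("sample-law-001", as_of="2024-04-01")
print(resp["lawTitle"])
```

The value of describing the API in OpenAPI is precisely that wrappers like this can be generated mechanically for many languages instead of being hand-written.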
Moving to the next page: this also concerns the disclosure of data. Up to this point I discussed the API; from page 25, I will briefly outline our study of the public UI. As outlined in the box at the top, this public UI is a prototype web service built using the prototype of the public law API I explained earlier. It shows development users how to use the API prototype, serving as a kind of sample, and the prototype UI web system is being provided at the same time as the API prototype. I will omit a detailed explanation; it is described on this page, so please refer to it as necessary.
Page 26, please. This is a screen image of the public UI. The prototype has already been used in user testing, and we have received many opinions from users. Public testing is currently underway, and the hackathon in November should bring many more opinions. Accordingly, the most important issue for this initiative is deciding which needs to prioritize.
This concludes my report on the current status of the law data disclosure functions.
The last topic, on page 27, is a status report on the architecture and data structures. As outlined at the top, the main work is the design of data structures and the validation of the necessary architecture, based on the mechanism for version management of consolidated texts that takes into account the uncertainty of enforcement dates, which was discussed in Material 1 of the 6th meeting of the Review Team on the Digitalization of Legal Affairs.
Due to time constraints, I will skip pages 28 and 29 and move to page 30, which covers the main work. Based on the data-structure approach proposed by the Digital Agency at the meeting mentioned earlier, which takes the uncertainty of enforcement dates into account, we are validating how far the issues can be resolved. The characteristics are described in points 1, 2, and 3 below: first, the dependencies related to a revision, that is, the relationship between a revision and its target, are managed as data; second, by preparing and managing the revised text as files in advance, consolidated texts are managed in anticipation of uncertain enforcement dates; third, the consolidated text of each version is managed as a file, anticipating the different possible orderings of enforcement dates. We are attempting to manage all of this using Git, and are validating whether this data model actually holds up across various assumed scenarios.
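The "one consolidated-text file per possible enforcement-date ordering" idea can be sketched in a few lines. Simple textual substitutions stand in for real amendments, and all names are invented; the point is only that, while the enforcement order is undetermined, each ordering can yield a distinct consolidated text that must be kept as its own version.

```python
from itertools import permutations

def integrated_versions(base, amendments):
    """Enumerate the consolidated text for every possible enforcement
    order of the pending amendments.  Each amendment is modeled as a
    (name, old, new) textual substitution."""
    versions = {}
    for order in permutations(amendments):
        text = base
        for _name, old, new in order:
            text = text.replace(old, new)
        key = "->".join(name for name, _, _ in order)
        versions[key] = text
    return versions

base = "The fee is 100 yen."
amendments = [
    ("AmendA", "100 yen", "200 yen"),  # raises the fee
    ("AmendB", "100 yen", "150 yen"),  # competing revision of the same words
]
versions = integrated_versions(base, amendments)
for key in sorted(versions):
    print(key, "=>", versions[key])
```

Whichever amendment takes effect first "wins" here, because the later one no longer finds its target text; this is exactly the kind of order-dependence that the per-pattern version files are meant to capture.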
Page 31. To conclude this part: for the approach of managing these various types of data with Git, the issues currently visible are chiefly performance problems. There are concerns about performance and data synchronization when a large amount of law data is managed in a repository, and these are becoming apparent. In addition, since the design does not use an RDB or the like, metadata must be saved as files, which may affect search performance.
It has been a long explanation, but this concludes the architecture part.
Mr. Umemaki, could you please take over for the final part?

Daiichi Hoki Co., Ltd. (Mr. Umemaki): Thank you.
Please take a look at page 32. In parallel with the specific surveys and demonstrations I have described so far, we are surveying and analyzing the current status and future of digital legislation from the perspectives of current examples of initiatives, the technologies required for future discussions, and the possible impact on society.
First, from the perspective of collecting and analyzing information on the current status and future of digital legislation, we are conducting a basic information survey as described on page 32, starting from the results of the foreign-country surveys in last year's report materials and from international workshops, with the cooperation of experts and students. In the future, we plan to survey legal tech companies about their needs regarding the future utilization of law data and advanced technologies.
We are also conducting a survey to elaborate the "Digital Legal Roadmap" discussed by the study team of the working group under the Digital Extraordinary Administrative Advisory Committee. This includes a survey, from the viewpoint of natural language processing, of the technologies required in each phase of the roadmap shown in the materials; a survey of advanced implementation fields in foreign countries; and a survey, from the viewpoint of public law, of the services that advancing through the roadmap phases could realize, the future impact on society, and the need for regulation.
Page 33 shows an extract from a list of information collected on related cases of digitalization initiatives in legal affairs and the utilization of advanced technologies. We plan to use it as basic data for the survey and analysis.
Finally, page 34 summarizes the future schedule of this survey and demonstration in a list covering October through March of next year, showing the main items.
That concludes the report from us, Daiichi Hoki Co., Ltd. and FRAIM Co., Ltd. Thank you very much.

Secretariat (Nakano): Thank you very much.
Regarding the law API Hackathon scheduled to be held next month, I have written about it in the chat, so please refer to it.
Regarding agenda item 1, I would like to set aside about 20 minutes for Q&A and exchange of opinions. If you have any questions or opinions, please raise your hand.
Member Yasuno, please.

Yasuno Member: Thank you for your presentation, Daiichi Hoki and FRAIM. I would like to ask three questions about the study of the data architecture. The first is to confirm, to begin with, where and how this data architecture is intended to be used. I think the right answer depends on the purpose of the data structure: for example, whether you want to improve the search performance of an application, make it easier to guarantee that the data is correct, make development easier, or make external participation easier. So my first question is whether you envision this being adopted by the editor you introduced and by the public API for law data.
I think there are many benefits to a Git-based architecture: it makes it much easier to use existing Git functions and the assets of the Git ecosystem. On the other hand, if Git is used in an uncommon way, you can expect to lose those Git-derived benefits. In that sense, if the method of operation deviates from standard Git usage, I think it would be good to consider fairly openly which architecture to adopt, depending on the purpose. That is my first point.
My second question: even granting that Git is used, it is stated that all versions of the consolidated text are retained as files, so I would like to ask whether those files are assumed to be managed inside the repository. I understand it to be general best practice to manage intermediate products outside the repository rather than inside it. For example, compiled or computed results can be generated on the application side or at the edge, and including them in the repository increases the complexity of management. I may be misunderstanding the assumption here, so I would like to ask.
The third point is performance, which you mentioned among your future concerns. What kind of operation are you concerned about: the file size of the repository, the time it takes to commit or resolve conflicts, or processing time? My understanding is that Git is quite sensitive to the size of individual files, and performance degrades greatly when a large file is added, whereas the number of files itself scales fairly well. I would like to ask what your assumptions are on that point. Those are my three questions.

Secretariat (Nakano): Mr. Miyasaka, please.

FRAIM Co., Ltd. (Mr. Miyasaka): Thank you. I would like to answer your questions.
The first was where we expect this to be used. First of all, the purpose is the accuracy of the data; we are proceeding on the understanding that this is the top priority. The perspective is how to manage an accurate master of law data, which I consider very important. The scope runs, to some extent, from the business flow at the planning stage through to the consolidated text at the point when promulgation is determined. However, since the enforcement date is not yet fixed at the time of promulgation, multiple orderings of enforcement dates must be considered, so managing the consolidated text for each such pattern is within scope.
As for the Git-based architecture, I believe it falls within the scope of your first question. As you say, there are benefits to using Git, and many existing assets can be reused; however, I recognize that our current approach diverges from the standard one. On the page shown on the screen there are methods (1) and (2); (1) is the one closer to the standard form, that is, Git management similar to that used in software development. The issues with it are listed in (3) on page 29: there are many respects in which this differs from software development. Some things are difficult to express, such as the impossibility of mechanically resolving dependencies between laws, modifications along the time axis, amendments to provisions to be integrated in the future, and the fact that the order of enforcement dates is undetermined even at the stage of promulgation. From the standpoint that management is conceptually possible with Git, it is not necessarily limited to Git; it can also be expressed in a tree structure such as file-and-folder management. That is the basis of one of the designs.
In terms of performance and other aspects, Git can be used as long as the data is in file form, but I believe we should consider whether that is appropriate, or whether to combine it with something like an RDB for performance, as I mentioned earlier. The point you raised, that we need not be bound by the standard approach, has also been discussed within this study.
That was my answer to the first question. Does that answer it?

Yasuno Member: Thank you. One part is still unclear to me, so let me ask: what is the reason you want to do this on a file system?

FRAIM Co., Ltd. (Mr. Miyasaka): First of all, one element is that with a file system we can utilize the existing functionality of Git; conversely, it can be handled by Git precisely because it consists of files. That said, I think detailed examination is needed to determine whether that is forcing things.

Yasuno Member: I see. In that sense, if it is not yet clear how much benefit can be gained from using Git, is it correct to understand that there is no need to be particularly attached to the file system?

FRAIM Co., Ltd. (Mr. Miyasaka): That's right. That is my understanding.

Yasuno Member: Thank you very much. I understand the first point.

FRAIM Co., Ltd. (Mr. Miyasaka): Regarding the second point, whether or not the consolidated articles are assumed to be managed in the repository, I would like to ask Yamamoto, who is doing the design, to explain and answer. Yamamoto-san, could you please?

Oxygen (Mr. Yamamoto): Certainly.
Nice to meet you. My name is Yamamoto from Oxygen; I have been subcontracted by Daiichi Hoki Co., Ltd. for this project.
Thank you for your question about intermediate products. We did consider managing the consolidated text in files outside of Git, but for now we are managing it inside the Git repository. Managing it outside is conceivable; is it correct to understand that you are suggesting it would be better to handle the consolidated text, for example, in the form of build artifacts?

Yasuno Member: I mean that the data would not be managed in the repository: if you have the original data and the revision data to be merged, the merged data can be generated. So I thought it might not be necessary to store both the pre-merge and post-merge data in the repository.

Oxygen (Mr. Yamamoto): I see.
I think that touches on the premises of the entire PoC. The reason it is managed in the repository is that the original concept was to manage the consolidated text itself as data in the first place. That is, the consolidated text is prepared and managed in a visible form in advance, and after the content of the edited file is confirmed, it proceeds to the next stage (examination). So I think there are two approaches: one is to generate the consolidated text afterwards, and the other is to create the consolidated text and generate the revision text afterwards. Since we adopted the latter concept this time, we manage the consolidated text in the repository.
Does that answer your question?
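The second of the two approaches described here, authoring the consolidated text and deriving the revision text afterwards, can be sketched with standard-library tools. A plain unified diff stands in for the real output formats (old/new comparison tables and revision sentences), and all names are illustrative:

```python
import difflib

def derive_amendment(old_lines, new_lines):
    """Derive a machine-readable 'amendment' (here: a unified diff)
    from the pre- and post-amendment consolidated texts."""
    return list(difflib.unified_diff(
        old_lines, new_lines, fromfile="before", tofile="after", lineterm=""))

old = ["Article 1  The fee is 100 yen.", "Article 2  Unchanged."]
new = ["Article 1  The fee is 200 yen.", "Article 2  Unchanged."]
diff = derive_amendment(old, new)
for line in diff:
    print(line)
```

The first approach would run the arrow the other way: store only the base text and the amendment, and compute the consolidated text on demand.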

Yasuno Member: I see. Thank you.

Oxygen (Mr. Yamamoto): Thank you.

FRAIM Co., Ltd. (Mr. Miyasaka): The third point was where the performance concerns arise. The first thing that comes to mind is, for example, the consistency checks that the editor performs on the data managed in Git, and the system's processing in general. To run such things, the metadata in the files must be examined. Normally this would be indexed and made efficient with an RDB, so performance can become an issue when the processing reads this data directly.
In addition, a large number of files will be placed in a single repository, and I am concerned that coordination between repositories and data synchronization will take time. For example, in the repository for a given law, all the amending laws and amendments at each point in time, as well as the undetermined patterns arising from each, will be managed as files. Under the current design, the size of a single repository will become quite large, and there will be problems with synchronization and coordination between repositories. Those are the two concerns.
Yamamoto-san, if there is anything to add, please go ahead.

Oxygen (Mr. Yamamoto): As a supplement, I am concerned about performance when data edited by each organization is merged into a central repository. Separate repositories are managed across organizations, and when newly edited data is linked for examination, the concern is whether the merge operations will complete in time.
Does that convey the situation?

FRAIM Co., Ltd. (Mr. Miyasaka): This is quite difficult to explain without a diagram.

Oxygen (Mr. Yamamoto): That's right. When data edited within an organization is linked to the central side at the stage of internal examination, there are performance concerns. The background, as Miyasaka explained, is that there are an extremely large number of files in one repository, so the number of commits will also increase significantly, and performance gradually degrades as the number of commits grows. I think those two are the performance concerns.
Does that answer your question?

Yasuno Member: I think I understand, roughly. That said, from what you have just described, my feeling is that the volume is on the order of a few files times the number of laws, and that it may well be fine at that scale; but this is exactly the kind of thing to verify through testing.

Oxygen (Mr. Yamamoto): Yes, I agree.

Secretariat (Nakano): Thank you.
Next, Member Yagita, please.

Yagita Member: Thank you. This is Yagita from Legalscape.
First of all, I would like to express my gratitude to everyone involved in this effort, which is very meaningful as a whole and will concretely bring DX to this country's legal affairs. The point I wanted to ask about as an engineer is close to Mr. Yasuno's: the Git-like design on pages 27 to 31. As you said earlier, I understand this may in the future become the database supporting all of Japan's legal affairs, so whether Git will be adopted there, whether something will be developed from scratch instead, and if Git is adopted, in what form, I think this is an extremely large decision, an issue whose consequences will last for decades. With that in mind, let me ask abstractly: if something other than Git were newly created, roughly what fraction would be reinventing the wheel, and what fraction is genuinely unique to legal affairs and hard to realize with Git?
For example, I think it would be quite difficult to have those drafting legislation use Git directly or issue Git commands. So one could, for example, develop a tool that uses Git internally but wraps it. In that case, the reinvented-wheel portion of version management would be quite small. As you know, Git is the most widely used version control system in the world, with hundreds of millions of users, and it is maintained by contributors around the world, so I thought it could be reused.
The performance concern mentioned earlier may overlap with Mr. Yasuno's point, but Git manages codebases the size of the Linux kernel, on the order of tens of millions of lines, so I suspect performance will not be a problem. Returning to my question, I would appreciate your sense of how unique legal affairs really are, and how much of the work could not be handled at all by a tool that wraps Git.
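The Git-wrapping idea Mr. Yagita raises, a tool that uses Git internally so drafters never issue Git commands, could be sketched roughly as follows. This is a hypothetical illustration, not the PoC design; it assumes the `git` command-line tool is installed, and the class and method names are invented for the example.

```python
# Illustrative sketch: a thin wrapper that hides Git from legislative
# drafters, exposing only "save a revision" and "list the history".
# Assumes the `git` CLI is on PATH; all names here are invented.
import subprocess
from pathlib import Path

class DraftRepository:
    def __init__(self, root: str):
        self.root = Path(root)
        self.root.mkdir(parents=True, exist_ok=True)
        self._git("init", "--quiet")
        # Git requires a committer identity; set one locally.
        self._git("config", "user.name", "Drafter")
        self._git("config", "user.email", "drafter@example.invalid")

    def _git(self, *args: str) -> str:
        out = subprocess.run(["git", "-C", str(self.root), *args],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def save_revision(self, filename: str, text: str, note: str) -> None:
        # One "revision" = write the file, stage it, commit with a note.
        (self.root / filename).write_text(text, encoding="utf-8")
        self._git("add", filename)
        self._git("commit", "--quiet", "-m", note)

    def history(self) -> list[str]:
        # Revision notes, newest first.
        return self._git("log", "--format=%s").splitlines()
```

The point of the sketch is the division of labor: version management itself is reused from Git, and only the thin user-facing layer is new.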

Secretariat (Nakano): Thank you. Could we have a response, please?

FRAIM Co., Ltd. (Mr. Miyasaka): Yes.
Thank you, Mr. Yagita. This is quite difficult to answer. First, as Mr. Yasuno noted earlier, in this PoC we are taking on the challenge of treating the consolidated text as primary; depending on whether the consolidated text or the amendment text of the amending law is regarded as primary, I expect the design, and the whole approach to assembling a long-term database of laws, to change substantially.
That said, to answer for this PoC, where in a sense the consolidated text, that is, the provisions published on e-Gov, is treated as primary: it is difficult to answer in percentages how much would be new and how much existing. We are currently working out the set of data that needs to be stored, and constructing those relationships, whether represented as folders or as a class diagram. The question of Git or not sits at a higher level of abstraction than that: we are considering whether that model should be expressed in Git or in an RDB. That is the premise.
On top of that, I do not think the Git mechanism alone suffices. Take a single merge: if the merged text is treated as primary, then when two texts are merged and, for example, a conflict arises requiring manual correction, there are workflow questions such as who performs that work in the first place and who approves the merge. The ratio you asked about depends on whether such matters are inside or outside the system's scope.
It will also depend on whether you aim to automate everything through systematization, or consider developing a convenient merge tool that wraps Git, and whether such things are included in the scope.
On the other hand, regarding what should be managed in legal affairs as a whole, and what kinds of data exist across laws, amending laws, and everything else, my overall sense is that this PoC has raised the resolution considerably. I am sorry I cannot answer your question directly, but Git alone may be difficult to generalize. That is the first point.
That is my answer to your first question. Does that address it?

Yagita Member: Thank you.
It was an abstract question, but since I think this will be a rather large piece of decision-making in the future, I wanted to raise the issue once. Thank you very much.

FRAIM Co., Ltd. (Mr. Miyasaka): Thank you.

Member Yasuno (chat statement): I agree that it is a rather big decision. I think the data-structure part is the part that is hard to reverse later and has a big impact.

Secretariat (Nakano): Thank you. Member Fujiwara, and then Member Tsunoda, please.

FUJIWARA Member: This is Fujiwara.
Thank you for today's presentation. I found it very interesting, and also difficult, and I would like to ask about the first part, the business analysis of legal affairs. Basically, you are to conduct a fact-finding survey of the business flow and propose improvements; on the other hand, a prototype editor can only be tested against the current flow. I wonder how radical the proposed changes to the current business flow will be, and which parts of the editor will be explored in depth as a result. There is also the digitization of official gazettes, which I expect will be among the matters discussed by this working group or covered by the PoC. Although this too is an abstract question, I would like to ask to what extent the work is premised on changing the business flow, including that area, and what your sense of it is.
That's all.

Secretariat (Nakano): Thank you. Could we have a response, please?

Dai-ichi Hoki Co., Ltd. (Mr. Umemaki): Yes.
Thank you for your question. As I explained today, we are identifying various issues through the legal-affairs workflows of each ministry and through hearings, and exploring what workflow improvements are possible from there. At the same time, on the system side we are at the stage of considering a prototype, advanced in parallel. It is difficult to give a complete answer at this point, but at this stage I believe we should carefully analyze what was said in the hearings and shape what workflow improvements are possible.

FUJIWARA Member: I understand.
The indentation of vertical writing, for example, has been discussed since the very beginning of these meetings, and I think changing that sort of thing would change quite a lot. I personally think such changes are meaningful, so please consider them.
That's all.

Dai-ichi Hoki Co., Ltd. (Mr. Umemaki): I think you are right. Thank you.

Secretariat (Nakano): Thank you.
Let me add that this PoC is being carried out without prejudgment: it is an exploratory validation that reviews matters fundamentally without assuming the existing workflow, which is partly why some parts are difficult. As for the digitization of official gazettes, as today's participation shows, we are working in cooperation with the National Printing Bureau's initiative to renovate its system. It is hard to say how far the business flow will be changed, but it is being considered from a zero base. Thank you for the valuable comments.

FUJIWARA Member: Thank you very much.

Secretariat (Nakano): Next, Member Tsunoda, please.

Tsunoda Member: Yes.
I have a question, or rather a comment. I have been wondering about the architecture since I first saw this document: it would be a mistake to start from "use Git." Unless the ministries clarify, step by step, the specific functions needed, from the standpoint of requirements definition, or of what legislation actually requires in the first place, there may end up being a gap between what is built and what should have been aimed at. From today's remarks by Mr. Yasuno, Mr. Yagita, and others, it seems this is being guarded against, so I am relieved. Even so, when the discussion is led by engineers, the development side's perspective tends to strengthen, and proposals tend to be pulled toward it. What matters is what the users, those drafting legislation, want to do, and I would like the validation to confirm that the technology matches those needs, rather than a bottom-up, technology-first argument of the form "we already have this technology, so it can be used here." I expect you have considered this, but I would like to confirm it.
The other point concerns the development of the editor. I had the opportunity to speak with ministry officials before e-LAWS was created, and some of the officials who gathered then were skeptical of a uniform system for drafting amendment texts, because each had their own way of working. Whether they would actually use it is another matter, and some simply did not want to. My impression was that views varied widely, so I think it would be difficult for Mr. Miyasaka and his colleagues alone to decide how widely to conduct hearings and how to coordinate this. Some of it has to be done top-down, so I felt that consensus will need to be formed not only by Dai-ichi Hoki and FRAIM but together with the government, including the people of Digital Agency, and the users placing the orders. I do not mean this as criticism of anyone, but I would like to point it out to the engineers and to Mr. Nakano.
That's all.

Secretariat (Nakano): Thank you very much.
Mr. Miyasaka, would you like to respond?

FRAIM Co., Ltd. (Mr. Miyasaka): Yes.
Thank you for the first point, about Git. You are exactly right: the selection of tools and technologies should follow from what really needs to be achieved, so that means and ends are not reversed. Your point is well taken. Thank you.
As for the editor, as you say, the hearings surface quite a range of opinions. For example, some at the Ministry of Finance say they draft starting from the amendment text, while other ministries and agencies say they start from the old-and-new comparison table. Even that alone changes the design of the system, so which should we build? If we aim for something versatile enough to achieve everything, I suspect it ends up as the usual story of a tool that can do anything being usable for nothing. Drafting work ranges from the very simple to the very difficult and complex, and once we try to cover the complex cases the system gradually becomes complex, until even the simple cases become hard to use. The sizes of laws also differ completely, so in assembling one system, without a clear concept of which users to focus on, it will end up hard to use. There are difficult points there, and I think you are absolutely right. Thank you very much.

Tsunoda Member: I thought the user side, or rather the government side including Digital Agency, could speak to that, and that it would be good to discuss it at this meeting.

Secretariat (Nakano): Thank you.
Let me add a little. We are conducting hearings on various forms of amendment, bundled amending laws and laws amending a single statute, including the officials in charge of tax-related amendments at the Ministry of Finance. Also, within the department in charge of this working group at Digital Agency, I myself have never programmed, but I am actually using Git and trying out how it works. Much may depend on what the interface turns out to be, but as Member Yasuno pointed out, this is large-scale decision-making, and the data-structure part is hard to reverse and has a large impact; that is exactly so. A mechanism for updating law data is only now finally being built, and if it were to be changed drastically there are various questions about whether that could even be undertaken, so I think there are parts requiring careful consideration, and I hope to continue the validation. Thank you very much for the valuable comments.

Tsunoda Member: Thank you.

Secretariat (Nakano): Next, I would like to move on to agenda item 2. Member Tsunoda and Dr. Kano of Shizuoka University will each explain their research and perspectives on AI and law in about 15 minutes, in that order, and I would appreciate it if opinions and questions on this topic could be held until the end.
Mr. Tsunoda, please go ahead.

Tsunoda Member: Hello. This is Tsunoda.
Since time is short, I will go straight into the presentation.
Today I would like to speak under the title "Technical Issues and Prospects for the Use of AI in the Field of Law," following the table of contents. First, regarding the purpose on the third slide: as written in orange at the bottom, when looking ahead to things like legal simulation and Rules as Code (RaC), which have come up several times in meetings at Digital Agency, it is worth sharing the perspective of not repeating similar past failures. I write "legal AI" here, but you can roughly read it as AI in the field of law. As for the overview of AI on the fourth slide, I think most of today's participants are already familiar with it, so I will mostly skip it, but let me confirm one thing just in case. AI is software, so in what ways does it differ from ordinary software? In the past, knowledge-based AI was the norm: since most AI was rule-based, knowledge was written in the form of logical rules, domain knowledge was fed into the "inference engine" at the core of the AI, and answers were obtained for questions in that domain; the knowledge was swapped in according to the domain. Since the start of the 21st century, learning models have been built from big data using machine learning, so even without people writing down the knowledge, if the learned model for a domain is used well, intelligent answers can be produced. However, the fact that learned models are not easily inspectable by people has recently become a hot topic and a research theme at international AI conferences.
However, as shown on the fifth slide, in the field of legal AI various systems have been proposed and developed from quite early on, following the AI technology of each era. Practical systems did not appear until after 2000; but thanks to the recent improvement of AI performance centered on text processing, represented by the term "LegalTech," practical AI has appeared, mainly in the United States, and we are finally in a somewhat happier state.
On the sixth and seventh slides I describe the model that underlay the era when "legal inference" was the mainstream theme of legal AI. The reason I raise it is that Rules as Code ultimately comes down to how rules written as code are applied; in that case the old rule-based methods may be revisited, so I wanted to review them here. As time is short, I will focus on the form of legal inference in statute-law countries such as Japan, Germany, and France, based on the principle that where a statutory provision exists, a court decides by applying that provision. In most cases a provision is written as a rule: if the condition part holds, the conclusion part follows; that is, if the legal requirements are met, the legal effect arises. If the case matches the condition part, the effect part is determined as the judgment, applied in the form of a deductive syllogism. When you enter a faculty of law, the term "legal syllogism" appears frequently. Through the 1970s, 80s, and 90s, research continued on whether this model could be simulated directly by computer, and some researchers still take this approach; this is the base computational model.
However, as you can see on slide 8, even if the form is a syllogism and the statements of fact used in it are correct, it does not work if the rules used are incomplete. The universal rule written at the top works if it is essentially definitional, like "people are living creatures," or axiomatic as in mathematics, but legal rules are not like that. Take a rule such as "a person who kills another shall be punished": if the act was self-defense, the illegality is denied, and in the end the rule is not applied. Probabilistic rules are also often involved. So, unlike the figure on the lower left where "people" sits cleanly inside "living creatures," the set of "killers" does not fit inside "those who are punished"; the orange exceptional region always sticks out, and so the inference does not always work.
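The exception-defeats-rule pattern just described, a general rule such as "killing is punishable" that is defeated by self-defense, can be illustrated with a toy defeasible check. The predicates and the exception list are invented for the example; real defeasible-logic systems are far richer.

```python
# Toy sketch of a defeasible legal rule: the general rule applies
# unless an exception defeats it, which is why plain deductive
# syllogism is not enough. Rules and case facts are invented.

def punishable(facts: set[str]) -> bool:
    """General rule: killing is punishable -- unless an exception
    (e.g. self-defense) negates the illegality and defeats the rule."""
    if "killed_a_person" not in facts:
        return False
    exceptions = {"self_defense", "insanity"}
    return not (facts & exceptions)
```

The key point is the final line: the conclusion cannot be drawn from the condition part alone, but only after checking an open-ended set of defeating exceptions.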
There are further problems with what was called legal reasoning. As slide 9 shows, besides the many exceptions I just mentioned, there are problems when amending articles and laws: drafting amendment text is not easy, and the resulting state is not easy to maintain or to simulate well on a computer. Law also uses higher-order expressions as a matter of course: it is natural to refer to other articles and laws, and even to apply operations to them; phrases such as "shall be deemed to" treat what could itself be a proposition as an object within the text, so higher-order handling is required. And while ordinary propositions state facts, law naturally requires incorporating normative, deontic logic, "must," "must not," "may," into the inference mechanism. This can be done if one insists, but it is somewhat troublesome.
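The deontic modalities just mentioned ("must," "must not," "may") can be sketched as a small compliance check. The norm sets are invented examples; real deontic logics also handle nesting, conflicts, and contrary-to-duty obligations, which this sketch ignores.

```python
# Minimal sketch of deontic modalities: obligations ("must do"),
# prohibitions ("must not do"), and permissions ("may do").
# A set of actions complies if every obligation is performed and no
# prohibition is violated; permissions never cause violations.
# The norms below are invented for illustration.
OBLIGATORY = {"file_report"}
FORBIDDEN = {"disclose_secret"}
PERMITTED = {"request_extension"}  # optional; shown for completeness

def violations(actions: set[str]) -> set[str]:
    missed = {f"omitted:{a}" for a in OBLIGATORY - actions}
    done_forbidden = {f"committed:{a}" for a in FORBIDDEN & actions}
    return missed | done_forbidden
```

Note the asymmetry with ordinary factual propositions: an unfulfilled obligation is a violation even though no fact is false, which is exactly what a plain truth-functional inference engine does not capture.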
Also, legal philosophers speak of "open texture": the concepts appearing in statutes and articles are not exactly defined in the law but ultimately depend on the interpreter. Even reading the definitional articles, the meaning of each word in the definition carries context-dependence and diversity of interpretation; indeed statutes are often deliberately written so that they can be read in many shades, leaving matters to context and interpretation. That situation is what the term "open texture" refers to here.
In addition to such provisions and rules, most statements of fact in law describe social facts rather than physical ones; it is not a matter of something physically breaking or being located somewhere. For example, even if the act of one person handing money to another can be physically observed, whether it was a gift or a loan is a social fact that cannot be inferred from the objective act alone, so it is hard to determine without evaluating and interpreting a great deal of contextual information. In the end, the troublesome fact that interpretation and human cognition enter in is what makes legal inference difficult.
Analogy, of course, is also frequently used in the legal field, and such higher-order inference is hard to handle.
Then there is the question of whether current AI can solve these problems. Current AI for the most part does not construct arguments, and the learned model contains no explanation written in words people can understand. In that case, for uses such as legal simulation, explanation becomes very difficult, and unless that difficulty is overcome, even current AI will be hard to use. By contrast, in the old legal-reasoning approach, once a conclusion is reached the entire proof is guaranteed, so, setting aside whether the proof is hard to read, the result is logically explained.
For this reason, as slide 10 shows, many legal-AI researchers moved away from the study of legal inference around 1990. Instead, taking the position that legal inference is performed by people rather than by AI, they have actively studied how to mathematically formalize the situation in which people argue with one another in a "legal dispute," and how to support such argumentation; this continues to this day. It is unlikely to be used directly in Rules as Code and the like, so I will not touch on it today, but viewed broadly across legislative activity as a whole, it seems a technology that will be put to effective use in areas such as gathering legislative facts, communication around systems and design, and consensus building.
Slide 11, attached just in case, notes that when trying to represent legal knowledge, logical formulas are the most typical means when mathematical stability is the concern. But even with logical formulas, the problems I mentioned are not solved at all, and the biggest problem, as I wrote at the lower right, is that the formulas must be created by hand, which is laborious, a drawback that current machine learning does not share.
With that as a premise, as I stated at the top of the characteristics on slide 12, even if simulations are run on a rule base, most legal rules are constraint rules, and these alone are not enough. That is because most human action rests on what is legally called the principle of private autonomy: basically, people may act freely without permission. On top of that, the law writes down only restrictions, such as "you may not do this" or "act within this range." So while one can imagine a "simulation of law," in reality almost no generative rules, rules that produce behavior, are written in law. Hence when running simulations, one faces arbitrariness in what must be developed, various ad-hoc rules, rules that did not pass through regular procedures, and rules whose consistency is unknown, which makes it quite difficult.
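The point that legal rules are mostly constraints, so a simulation must supply its own generative model of behavior, can be sketched like this. The actors, actions, and forbidden pairs are all invented for the example; the arbitrary toy generator stands in for the part that the law itself does not provide.

```python
# Sketch: law as constraint rules. Under private autonomy, any action
# not forbidden is allowed, so a simulation needs its own generator of
# candidate actions; the legal rules only filter them. All names below
# are invented for illustration.
import itertools

def generate_candidate_actions():
    # The generative side is NOT given by the law -- here, an arbitrary
    # toy generator of (actor, action) pairs stands in for it.
    actors = ["citizen", "company"]
    acts = ["build_house", "dump_waste", "open_shop"]
    return itertools.product(actors, acts)

# The constraint side IS what the law typically provides.
FORBIDDEN = {("company", "dump_waste"), ("citizen", "dump_waste")}

def lawful_actions():
    return [a for a in generate_candidate_actions() if a not in FORBIDDEN]
```

The design point is the asymmetry: `FORBIDDEN` can be read off the statute, while `generate_candidate_actions` is an ad-hoc modeling choice, which is exactly the source of arbitrariness noted above.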
Slides 13 and 14 concern what is called a legal ontology: treating the legal concepts appearing in statutory text computationally. We need to handle legal concepts by putting their conceptual structure into a form a computer can read cleanly, a form that various applications and various AIs can use; this too is a problem.
The ontology here differs from philosophical ontology; it appeared in the first half of the 1980s. At the time, AI was knowledge-based, everyone wrote knowledge in their own way, and it turned out the results could not be shared. If, on the other hand, conceptual structure could be described like an objective entity, like a physical object, it would be convenient for everyone to share.
But legal concepts do not refer to physical existence in the first place, so parts of this deviate from, or conflict with, the original intent of ontology. In the current view, an ontology is a specification of a conceptualization, and its value emerges from starting there and making use of it; hence the efforts to build legal ontologies. To make good use of one, standardization will be the key. In the judicial field, however, it is a basic premise of the courtroom that plaintiff and defendant clash over the same concept with different interpretations, so even a "common understanding" is hard to speak of in real judicial practice. But what we are discussing here is not the judiciary: it is the ontology on the legislative side, how to describe and accumulate knowledge when laws are made. When a law is created, it is usual for everyone to agree to some extent and then make it common, so standardization seems easier in the legislative field than in the judicial field. How a law, once created, is interpreted may split into many readings, so standardization may be difficult there; but there is a faint hope that at creation time things can be made common and standardized.
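A shared concept hierarchy of the kind a legal ontology provides can be sketched minimally as an is-a relation with subsumption checking. The concept names are invented; a real effort would use an established representation such as RDF/OWL rather than a Python dictionary.

```python
# Minimal sketch of a shared concept hierarchy ("ontology"): each
# concept names its parent, and subsumption (is-a) is checked by
# walking up the hierarchy. Concept names are invented examples;
# a real legal ontology would use a standard such as RDF/OWL.
PARENT = {
    "juridical_person": "legal_person",
    "natural_person": "legal_person",
    "stock_company": "juridical_person",
}

def is_a(concept: str, ancestor: str) -> bool:
    while concept is not None:
        if concept == ancestor:
            return True
        concept = PARENT.get(concept)
    return False
```

Once such a structure is standardized and shared, different applications can agree that, say, a stock company counts as a legal person without each re-encoding that knowledge.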
On the last slide I summarize the issues and recommendations. As written on the left, although there are many difficulties, major technological innovations are arriving, as you all know, and it is natural to make use of them. Since time is short, I will speak only to the red text among the approaches and recommendations on the right. I would point out that it is better to proceed with AI in the legislative field by limiting the scope of application, as is being done now. In doing so, as discussed earlier, it is better to organize tasks and use cases properly: the needs of the field, the specific tasks and use cases, should be determined first and the technology fitted to them; otherwise there is a risk of divergence, failure, or an end product that cannot be shown.
Also, it is not good to introduce legislative AI piecemeal, so work toward overall optimization is needed; but an important point that tends to be missed here is the time axis. Looking at various DX efforts, we often see what is called overall optimization performed over a certain period, while optimization over the long term is not. Considering the time axis, I strongly feel a structure for continuous effort is necessary: not only optimization at a point in time, but a consistent structure sustained over a long period. And as I said earlier, neither spatial nor temporal optimization is possible without promoting standardization, so I think it is necessary to standardize around shared recognition and intent.
I am out of time, so I will stop here. Thank you for your attention.

Secretariat (Nakano): Thank you very much.
Dr. Kano, please go ahead.

Associate Professor Kano: Hello, I am Kano from Shizuoka University. Thank you for having me; let me get started.
Today I was given 15 minutes and was rather at a loss over what to cover, but after listening to Mr. Tsunoda's talk I thought it better to adjust a little, so I have in fact made some small additions to the slides I distributed. Let me go through them quickly.
My understanding was that a vision was wanted, and I wanted to speak in a way that could be drawn on later if possible, so I prepared a broader story. The title is "Generative AI and NLP Research." I am a researcher specializing in natural language processing, that is, in dealing with language; you can think of me as someone who builds things like ChatGPT. That is what I do.
I have many ongoing themes: some concern fundamental aspects of language processing, and on the applied side, besides law, there are fields such as politics and medical care. They look different from one another, but at the root they are the same, so in that sense I would like to introduce some of them together with my vision.
With only 15 minutes, I considered explaining these technical terms, but that alone would use up the time, so for those interested I have put them in the appendix; if another opportunity is arranged, I can explain them. When I checked beforehand, I was told that everyone here surely knows them, so I omitted them. I apologize if some terms are unfamiliar; I think the story can be followed even without them, and if you ask questions I will be able to answer and explain. So I will use these machine-learning terms without explanation.
As for the future concept of this project mentioned earlier, I am helping to prepare a document summarizing the future story, so I am well aware of the situation, and it contained a roadmap like this one. To think about where this will go, I believe we need to know how far the technology has advanced now and how much of what exists can be used. That is what I would like to talk about.
First, we are doing a lot of applied themes in medical care, and one of them is automatic diagnostic support using electronic medical records. We are doing this in a joint research project because a large amount of electronic medical records are available in hospitals, even though they are in personal data. Now, we are using the data to automatically optimize the action of anticancer drugs. In other words, we are using machine learning to predict which anticancer drugs should be used for which patients to expect the best effect with the fewest side effects.
This is due to language processing, but in many cases, numerical data is more effective or contributes to output. In fact, in this case, there is inspection data, so of course, inspection data is very effective, but in addition, where the doctor focused on is written in the electronic medical record, so language processing is important.
What is really difficult is the method of predicting by using both numbers and letters by superimposing the two. It is quite difficult, and I am currently trying to do it. In this case, compared to legal systems, the data is written in a more colloquial form, so there is a problem of how to process what is written in so-called broken language, and there is also a problem of personal data conservation, so it is difficult in various ways.
There is also the superposition with numbers I mentioned earlier. This sometimes arises in law as well, and if you are not careful, simply treating numbers as characters often fails, so various ingenuity is required.
We have another medical project, which has long performed automatic diagnosis of mental illnesses. For the so-called five major diseases, depression, bipolar disorder, anxiety disorder, dementia, and schizophrenia, we recruited subjects from patient groups and a healthy control group, recorded conversations with them, and diagnose automatically from those conversations. It works quite well. Because it is conversational, you may feel it is far from law, but if, for example, courtroom communication becomes a target in the future, that too is colloquial, so I think the same issues will arise.
We are also researching applying this to posts on Twitter, now X, and other social media, making similar judgments from posts. With a large amount of data we achieved similar results, though with some noise. This too is at a practically usable level, but it shares a problem with law: even in medicine, how to use the results is always a question, and in Japanese culture, if no one can take responsibility, people will not or cannot use them. In medicine, doctors are present, so for now we can only position it as a doctor-support system. The same will probably happen in law, and I think how much can be automated will be the deciding issue.
As for how far so-called generative AI can be used here: we are trying it, but it may not be very usable. Colloquial data is likely under-represented in the training data, and as Dr. Tsunoda mentioned earlier, the intermediate inference processes are hard to explain, so it is not well suited. Moreover, both datasets are personal data and cannot be sent to ChatGPT.
One more topic, the main one, which is the core of the legal story this time. Legal texts involve logic, and the key technical notion is called entailment. For example, given the sentences "I am a human being" and "I am an animal," being human entails being an animal. Such a relationship between sentences is called an entailment relationship.
Determining whether a sentence pair stands in entailment, contradiction, or neither (neutral) is considered the core of all such logical processing.
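The three-way decision just described, classifying a sentence pair as entailment, contradiction, or neutral, can be sketched as a tiny classifier. The keyword-overlap rules below are a toy stand-in for a real textual-entailment model (real systems use trained models, not word sets); they only illustrate the task's input and output shape:

```python
# Toy sketch of three-way textual entailment (entailment / contradiction /
# neutral). The word-overlap heuristics are stand-ins for a trained model,
# used only to show the task's input/output shape.

def classify_pair(premise: str, hypothesis: str) -> str:
    """Return 'entailment', 'contradiction', or 'neutral' for a sentence pair."""
    p_words = set(premise.lower().split())
    h_words = set(hypothesis.lower().split())
    # Heuristic: one side negates shared content -> contradiction.
    if ("not" in p_words) != ("not" in h_words) and p_words & h_words:
        return "contradiction"
    # Heuristic: hypothesis fully covered by the premise -> entailment.
    if h_words <= p_words:
        return "entailment"
    return "neutral"

print(classify_pair("I am a human being and an animal", "I am an animal"))
# -> entailment
```

A real entailment system would replace both heuristics with a model trained on labeled sentence pairs; the interface, a pair in and one of three labels out, stays the same.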
For example, in the "Todai Robot" project I previously participated in, questions such as social-studies problems could be answered automatically, because if you know whether the textbook content entails the question sentence, you can mark it true or false. There is also automatic answering of the bar examination, which I will introduce later and am still working on: I have long collaborated with Professor Ken Satoh of the National Institute of Informatics, handling the language-processing part, to automatically answer the short-answer civil law questions of the bar examination. There, too, it suffices to know whether the statutory text entails the question sentence.
One application of this, to introduce another case, is automatic estimation of public opinion on the Internet. The world is flooded with fake news, and it would be good to detect it automatically, but in fact you cannot know whether something is fake without being on site; only people who were there can know where the Prime Minister was at what time, and that is hard to check. So we take a step back and automatically determine whether a person is making statements that contradict, or entail, other statements. If the same person makes contradictory statements, there is a good chance that person is lying, has changed their mind, or is saying something strange. Accumulating such judgments, we can group people who say the same thing and people who contradict each other, and visualize the flow of the discussion. From that we can see where information came from, and hence the possibility of fake news. So if we can detect entailment and contradiction, it can be used not only for legal logic but for many other things. Incidentally, I call this an application to political science: public opinion is now largely formed on networks and the Internet, so we are trying to trace its future flow.
To do so, I think we need to know the users' attributes: what kind of people take in what kind of information and how they react to it, that is, whether they retweet, change their opinions, or do not. To infer such attributes automatically, we survey Internet users' attributes through crowdsourcing and estimate them with machine learning. Then, for example, if we know a user is extroverted or in a certain age group, and can learn whether such users retweet and spread a given kind of story, we can grasp the overall flow of public opinion. So I hope you can see why knowing the relationship between sentences is necessary.
Now for my main topic, law. For many years I have worked with Professor Ken Satoh on COLIEE. This is an international competition held every year, open to anyone: participants build automatic systems for processing legal documents and compete on performance. Such shared tasks are common in the field of artificial intelligence, and this is one of them. We take part both as organizers and as participants. There are two main tasks: one uses case law of the Federal Court of Canada, and the other uses statute law, namely the short-answer civil law questions of the Japanese bar examination. Today I will mainly explain the latter.
If you restructure the civil law short-answer questions of the bar examination appropriately, each can be broken down into a question sentence paired with a relevant article, and the answer is whether the question sentence and the article stand in an entailment relationship, marked true or false. We pose these questions, have machines solve them automatically, and compare performance. Although past questions are limited, about 1,000 can be used, and every year roughly 100 questions from the latest exam are posed for the machines to solve. Since the competition is international, we also provide data translated from Japanese into English by humans. For example, for Question 14 of the first half of the civil law short-answer section, the question sentence is given together with the relevant article, identified by a human expert, and the system must determine whether the pair is in an entailment relationship.
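Scoring in this statute-entailment setting reduces to plain accuracy over the yes/no answers: each system answers true or false for every (article, question) pair, and systems are compared on the share of correct answers over the roughly 100 questions of the latest year. A minimal sketch, with invented labels rather than real exam data:

```python
# Minimal sketch of scoring a yes/no statute-entailment task: systems
# answer True/False per (article, question) pair and are compared by
# accuracy. The label lists below are invented, not real exam data.

def accuracy(gold: list[bool], predicted: list[bool]) -> float:
    """Fraction of pairs where the system's answer matches the gold label."""
    assert len(gold) == len(predicted)
    return sum(g == p for g, p in zip(gold, predicted)) / len(gold)

gold      = [True, False, True, True]
predicted = [True, False, False, True]
print(f"accuracy = {accuracy(gold, predicted):.2f}")  # -> accuracy = 0.75
```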
Of course, if the question states the same thing as the article, the answer is "true," but usually the wording differs. This example is a simple case with much shared vocabulary, so comparison is easy. Usually the characters and words differ: the law is written abstractly, while the actual case is written concretely, with an individual's name such as Mr. Sato, a company's name, or where someone was one day when he stabbed someone with a knife. Linking them requires a level of abstraction, which makes this quite a hard problem. That is the task.
This slide is not in your handouts, but we have actually built such a system on a rule base, in a form whose decisions can be explained using the results of linguistic processing. Here, for example, you can extract subject-predicate-object triples and check whether they match, but even that is quite difficult. For example, here it says "having the right to rescind"; this and "can be cancelled" mean the same thing, but they look different in word composition, so unless you process them properly you cannot tell they are the same. Differences in wording occur at many levels, so I hope you can see how hard it is to absorb them all.
This is also not in the handout; I added it after hearing Mr. Tsunoda's talk earlier. Professor Ken Satoh has long been developing a language called PROLEG, and he says that if a case is expressed in this form, even a judgment can be produced. You can think of it as a legal version, a developed version, of the language Prolog. The problem is that the role assigned to language processing is to convert the Japanese written above into the logical form written below. That is true, but in reality it is very difficult. The predicate names in the logical form below were created by humans and do not match the surface text. Moreover, much is left unwritten and must be filled in, which is quite hard. In this example, about a payment tied to promotion, it is not difficult to imagine how hard it is to decide whether "repay when promoted" should be understood as a promotion payment, or whether it is a fixed-term contract with an uncertain term. Even where the task looks easy, the language-processing part is difficult, and connecting the two is hard.
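PROLEG itself is written in a Prolog dialect; the sketch below only imitates its core pattern in Python: a legal conclusion holds when its requirement facts are established and none of its exception facts are. The rule and fact names here are invented for illustration and are not actual PROLEG code:

```python
# Toy imitation of the PROLEG rule pattern: a conclusion holds when its
# requirements are established facts and no exception fact is established.
# Rule and fact names are invented; real PROLEG is a Prolog dialect.

RULES = {
    "contract_effective": {
        "requires": {"offer", "acceptance"},
        "unless": {"fraud", "duress"},
    },
}

def holds(conclusion: str, facts: set[str]) -> bool:
    rule = RULES[conclusion]
    return rule["requires"] <= facts and not (rule["unless"] & facts)

print(holds("contract_effective", {"offer", "acceptance"}))           # -> True
print(holds("contract_effective", {"offer", "acceptance", "fraud"}))  # -> False
```

The hard part described here is upstream of such rules: mapping natural-language Japanese onto predicate names like these, which humans chose and which do not match the surface wording.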
This is another slide you do not have at hand. This project itself has trial automation as a kind of final goal, so ideally everything connects in the end and a judgment is produced. The difficult part, I think, is language processing: for example, determining facts from evidence, or, as just mentioned, applying the elements of a case to the statutory text across various levels of abstraction. I hope this conveys how genuinely hard it is.
For this reason we run the task every year; we reported results for the latest round in June 2023. For better or worse, so-called generative AI had the best performance. However, as Mr. Tsunoda said, the question is whether these systems are actually performing logical inference. It is possible that the training data happened to contain something similar to the answer, which was simply retrieved. In that case the problem can of course be solved, and practically that may be fine, but what we want is logical judgment from scratch, so we cannot say that was achieved. So what should we do?
In addition, for use in the field, the grounds and intermediate processes must be explainable, so-called explainable AI. So-called generative AI, whose mechanism I have not explained today, builds sentences by repeatedly predicting the next word. So if you ask it to explain its intermediate reasoning, it will output something, but the output will probably differ from its actual internal process. We believe the next step is a new task design for building and evaluating explainable AI, and we are just now designing it.
So I think we must consider the possibility that high performance does not mean reasoning built from scratch. Generative AI, even when it appears to do logical things, may be retrieving existing logical results and solving problems by shallow superposition; it succeeds because it knows so much, and whether that is good or bad is unclear. Perhaps most problems in the world can be solved by analogy to something seen somewhere before. Then they can be solved, and whether that is acceptable depends on society and on what users think, but we want to make genuinely logical processing possible.
Finally, let me introduce another example that seems difficult for generative AI: the AIWolf ("Werewolf Intelligence") project that I have long worked on with colleagues. Do you know the conversation game Werewolf? All young people know it, so if you don't, you are not young. It is a game of detecting liars through conversation alone: one or two players are secretly assigned as werewolves, pretend not to be, and try to deceive the others. It is a fairly advanced game. For example, to tell a lie you must keep your story consistent with how you want to appear; and considering how things look from outside, A thinks B is a liar, B does not think so, B watches A thinking B is a liar, and so on, the combinations explode. Reasoning over all such combinations is probably difficult for current generative AI. The research is ongoing, so I cannot be sure, but as far as we have tried, it seems difficult. In the first place it is hard even to make the model tell a lie, so how to handle that will be one future challenge.
This, too, is current: every year we hold an open-participation contest in which automatic players play against each other. With generative AI such as ChatGPT, the conversations have become very clean. Vocabulary and grammar are fluent, at a level indistinguishable from humans, but the complex settings and complex relationships I mentioned do not yet seem to be handled. This is considered the next challenge, and it may not be solvable by simply extending current generative AI.
In closing, let me talk about the future. As a language-processing researcher I must of course aim to be the best in the world, so we cannot afford to lose to ChatGPT or other existing systems. To build something that surpasses them (I will not explain the details today), I believe the heart of current LLMs, or generative AI, is probably the instruction part. Performance depends greatly on how fine-tuning is done there, so I want to explore that mechanism. At present it is too complex to understand well, but by exploring what inputs make it better, and in what cases, I think there is something beyond.
Another point: we will probably hit a limit somewhere, and I have always wanted to create a system that is close to how people work. Current Transformer-based systems are considered far from human processing, so I want to build something closer, and thereby something that can stand close to people. In the end this means what is called "a fusion of 'classical' language processing and deep learning," that is, symbolic processing. I want to build a system whose behavior can be expressed and explained in symbols while retaining high performance, by skillfully combining such deterministic processing with today's probabilistic vector computation.
I spoke in a rush and still did not quite fit into 15 minutes, but I will stop here. Thank you very much. The remaining part is an appendix; please take a look if you like.

Secretariat (Nakano): Thank you very much.
If you have any comments or questions, please raise your hand.
Member Yasuno, please go ahead.

Yasuno Member: Thank you for your presentation. I was able to listen to it with great interest.
I would like to ask Professor Kano one thing. You said that current generative AI such as ChatGPT only predicts the next token, and that it is unclear what kind of inference it actually performs inside. I wonder how this will develop. That is, when I see an AI that appears to explain its logic perfectly, I do not know what structure a large neural net has inside, but there is a possibility it is really doing something close to inference, and I cannot completely rule that out. This applies not only to neural-net architectures but to different ones as well. Is it even distinguishable whether an architecture truly understands and infers to produce a result, or produces largely correct answers without actually understanding? Is there any debate about this?

Associate Professor Kano: This is just my own research view, but as far as I can tell, current models cannot handle even the simple entailment relationships I described. That is, I think they cannot do it without a clue from something similar, so I believe they cannot do logical inference.

Yasuno Member: My question was whether, for current AI, we can distinguish between really understanding the content and not understanding it yet returning almost the same answers because some complex mechanism happens to work.

Associate Professor Kano: That raises the question of what "understanding" even is, but I take it to mean whether there are enough internal steps to explain the details. One way to test is whether the model is merely matching on similarity: as I mentioned earlier, if you give it a lot of information it succeeds because something non-essential is similar, but if you ask in a simple form without much information, it probably cannot. That is what I described earlier.

Yasuno Member: I see. I understand. Thank you.

Associate Professor Kano: However, since it is under research, it may be possible. I think there is still room to try.

Yasuno Member: I understand. Though indeed, it is hard to say what "understanding" is.

Secretariat (Nakano): Thank you very much.
What do you think, Mr. Yoneda?

Yoneda Member: Thank you.
Today I could not hear the first part due to connection trouble, so let me give my overall impression. There are people who will use the system now under development, as Dr. Tsunoda touched on a little, and whatever the output is, practitioners must take the words from the system, work with them, judge them, and send them out themselves. What comes out from there is then distributed to society, but I felt that discussion taking into account the part that will actually be released into society remains a challenge.
The pursuit of logic and correct reasoning in machines is of great value both academically and for system development. But in social implementation, in the situations where we use these systems, whatever they are, once outputs are in circulation they get used, so if we are careless, fakes will pass. Based on how language works in society, we need to be able to steer toward greater correctness, but my impression is that I could not see how much of that is built into the system.
For us to achieve results in this PoC and advance the digitalization of legal affairs, I felt again that it is important to steadily implement the strategy Dr. Tsunoda mentioned of starting close to home. Of course, I am not saying we should stop advancing research on using AI, or on making the current approach to legal affairs smoothly reproducible by systems. But my impression is that the working team should conclude that the nature and limits of the output must be shared with users, and that how it is incorporated into the context of the work should be discussed on the basis of practical operations and outputs.

Associate Professor Kano: I am not sure I can answer this, but from my point of view, that is exactly what explanation is for: if a system only outputs bare results, anyone will want to ask what they are based on. I am afraid that, at present, all we can say is that providing such explanations is the only way.

Yoneda Member: My earlier explanation may not have been good. In the end, what people use in society is information that passes between people, so it will be fine if the system provides information that can create persuasive power between people. Today's discussion seems to assume a situation where a machine and a person face each other one-on-one. It is the same with making laws through legislation: in the world of law, communication between people always determines what resources can be invested and discussed. So I especially wanted a picture of how outputs appearing in such settings will appear in a concretely usable form. That is what I wanted to ask Professor Kano.

Associate Professor Kano: I see. It has to be a support system, so in actual use I think it will have to be one-to-one, with humans operating the machine.
In that case, it would take that form.

Yoneda Member: When you want to reach a conclusion in a legal argument, people must argue with each other, discuss the evidence, and reach a conclusion together. So the positioning of what kind of information comes out at that point felt a little different from my sense. Of course, I think it is natural that in research there are places that must be explored in a one-to-one relationship between machine and person.

Associate Professor Kano: I see. Thank you.

Secretariat (Nakano): Thank you very much for your valuable opinion, Mr. Yoneda.
As for the remaining time, since I believe it relates to what Mr. Yoneda and Mr. Kano discussed, for the next Agenda 3 I would like Yamauchi, who is in charge, to explain in about five minutes how we at the secretariat have actually used AI to perform legal affairs.

Secretariat (Yamauchi): Nice to meet you. I'm Yamauchi from the Secretariat.
I would like to report on the results of an experiment on legal affairs assistance using AI, etc. Please forgive me for speaking a little too fast.
The opening section is a review, so I will omit it orally. Please turn to page 7, where the main subject begins.
Please see page 8. From here I will explain the results of the experiment on legal-affairs assistance using AI, etc. The left side is an outline of the experiment, which was conducted to sort out the suitability of LLM-based AI for legal affairs and what can be realized immediately.
This was announced in advance at the 8th meeting of the Legal Affairs digitalization Review Team. The method was for employees to actually try out ideas for legal-affairs assistance using existing products, and the experiment cases were recruited from employees within the Digital Agency.
On the right is an outline of the experimental results. I will introduce specific examples later, but in outline, we found some things that can be realized immediately. There were examples in which materials and figures were created and ideas were generated from texts such as draft articles and policy outlines; this is the second point. On the other hand, there were examples considered to require further medium- to long-term study, such as generating draft articles and performing checks specific to statutory text; this is the third point. In every case, when a person experienced in legal affairs checked the output, multiple outputs were confirmed to need correction; this is the fourth point. Among this round's examples, several combined written code with the law API, and some were reported to be highly practical, suggesting the effectiveness of combining AI with APIs and law data.
On the next page, I have quoted the contents of the previous meeting on the characteristics of AI using LLM and the considerations from the viewpoint of legal affairs. I believe that these considerations require continued attention, so I have reposted them.
Starting on page 10, I will introduce actual experimental examples, briefly and orally. First, page 10, currently displayed: this example organizes the policy background that is a prerequisite for considering draft articles. When drafting legislation, including laws, the details of the policy must be examined, so here ideas are listed in the form of so-called anticipated questions and answers.
Next, page 11. In this experimental example, the text of the Administrative Procedure Act was input, as shown on the right, and a flowchart for determining whether the public comment procedure prescribed by the Administrative Procedure Act is necessary was output. One caution: when we examined this figure, we found issues such as the items not being exhaustive.
Even so, since the figure itself is produced, we believe working time can be shortened, on the premise that an employee who can interpret the law corrects the content.
Page 12. This is an example considered to involve challenges requiring further medium- to long-term study. It concerns having the LLM generate draft articles, and there were many issues, such as simple prompts failing to produce the specific provisions the legal system requires. If the instructions are made more concrete and detailed, the output gradually improves, but as the prompt is refined in this way, there were cases where it felt faster simply to write the articles directly.
So although it may be effective for generating ideas, further study is considered necessary, such as devising prompts and decomposing the problem.
Page 13. This is an application example of combining with the law API. I will omit the details, but it confirmed that tasks grounded in the exact content of articles can be performed to a certain extent through this combination.
Page 14. In this example we experimented with supporting the searches for statutory usage examples that are frequently performed at actual drafting sites. We used the search function of the new law API prototype, which is currently under public development. Please see "What to search" at the upper right. For example, to find cases where the term "plan" is used in the pattern "determined by the government," you enter this and click the test button. First, the keyword-search API of the law API prototype retrieves occurrences of the term. Then an LLM judges whether each occurrence matches the specified usage, in this case "determined by the government." The flagged occurrences are automatically listed, as in the output at the bottom. The output still needs human checking from a legal-affairs standpoint, and an exhaustive search is difficult, but this example can quickly list candidate articles and reduce the burden of searching.
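The two-stage search just described can be sketched as follows. Both functions are stand-ins: `keyword_search` imitates the law API prototype's keyword search over a toy two-sentence corpus, and `llm_matches_usage` replaces the LLM judgment with a trivial substring test, so the real endpoint names and prompts are not represented here:

```python
# Sketch of the two-stage search: (1) a keyword search lists candidate
# articles containing a term, (2) an LLM-style judge keeps only those
# where the term appears in the specified usage. Both functions are toy
# stand-ins for the real law API call and the real LLM call.

def keyword_search(term: str) -> list[str]:
    # Stand-in for the law API prototype's keyword-search endpoint.
    corpus = [
        "The plan shall be determined by the government.",
        "The operator shall prepare a plan and notify the mayor.",
    ]
    return [article for article in corpus if term in article]

def llm_matches_usage(article: str, usage: str) -> bool:
    # Stand-in for an LLM judgment; a real system would prompt an LLM here.
    return usage in article

candidates = keyword_search("plan")
flagged = [a for a in candidates if llm_matches_usage(a, "determined by the government")]
print(flagged)  # only the first article survives the usage check
```

The design point is the division of labor: the exact keyword match is deterministic and cheap, while the fuzzy usage judgment is delegated to the LLM, whose output is then checked by humans.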
Please see page 15, which summarizes further challenges for study based on the results of this experiment.
First, there were experimental examples that produced reasonable results. It may be useful to study further which prompts are effective and to organize them as templates or manuals.
Second, simple prompts, for example for text generation, sometimes did not produce the intended output. Further medium- to long-term study would be useful, such as on how far these issues can be addressed by devising prompts.
Third, in terms of the user interface, it is considered useful to design one that is easy to use in legal affairs and does not invite misunderstanding, for example by clearly indicating that the output requires human checking. As discussed earlier, current LLM output is very fluent and looks plausible at first glance; even so, I think the interface must be designed so that the user, in this case a legal-affairs specialist, checks it critically.
Fourth, we believe it is useful to consider expanding the functions of the law API, and to combine the probabilistic model with deterministic, rule-based check programs. We also believe it is useful to consider functions and APIs for performing such automatic checks, to develop APIs and law data, and to consider legal-affairs support applications that combine these with AI.
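One way to read the "deterministic rule-based check" idea above is that mechanical properties of a draft can be verified by rules regardless of what a probabilistic model produced. The sketch below checks an invented, simplified convention, namely that "Article N" headings run consecutively from 1, and is not based on any actual checking tool:

```python
# Sketch of a deterministic rule-based check that could run alongside a
# probabilistic model: verify that "Article N" numbers in a draft run
# consecutively from 1. The numbering convention is simplified/invented.

import re

def article_numbers(draft: str) -> list[int]:
    """Extract article numbers in order of appearance."""
    return [int(n) for n in re.findall(r"Article (\d+)", draft)]

def numbering_consecutive(draft: str) -> bool:
    nums = article_numbers(draft)
    return nums == list(range(1, len(nums) + 1))

draft = "Article 1 ... Article 2 ... Article 4 ..."
print(numbering_consecutive(draft))  # -> False: Article 3 is missing
```

Unlike an LLM's judgment, a check like this either passes or fails for a stated reason, which is why combining the two kinds of processing is attractive.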
That concludes the results of the experiment on legal-affairs assistance using AI, etc., and the explanation from the secretariat. Thank you very much.

Secretariat (Nakano): Thank you very much.
I am sorry the time is very short, but I would like to take opinions and questions until around 3:58 p.m.
Mr. Watanabe, please go ahead.

Watanabe Member: I heard many interesting things today, and this was the part I was most looking forward to, so thank you very much.
What I would like to say this time, commenting firmly as a member, is that these experimental results should not be undervalued. There were results that worked well and results that did not, but looking at this round, I think it is quite remarkable that accuracy improved by combining with the law API, starting from a stage where that combination did not exist.
Let me add another point of view. It is true that in the field of legal affairs we demand close to 100% accuracy, but even when I am actually discussing the Housing Accommodation Business Act in connection with Airbnb, we lawyers cannot write a counterproposal even when asked to.
In that sense, even if accuracy is not at the 100% level, I believe it is extremely useful for both the administration and the private sector to be able to produce, with such a tool, a text that approximates the real thing. So it is not that use within the administration must wait until it is 100% correct. Rather, I want to emphasize that even the current results are quite socially meaningful, in that they provide tools and knowledge that we on the private-sector side have not had until now.
I'm sorry to speak so fast, but that's all.

Secretariat (Yamauchi): Thank you very much.
As you said, while 100% accuracy is necessary in legislative drafting itself, there are certainly use cases where the current level is already usable elsewhere; the required accuracy differs depending on the use. As Dr. Tsunoda's presentation mentioned earlier, new services will emerge as AI evolves. So I hope that research in legal affairs can be a driving force for creating services usable in other fields and for various purposes. Thank you very much.

Secretariat (Nakano): Thank you very much.
Does anyone else have comments?

Member Yoneda (chat statement): Since there is not much time, I will write in the chat: I found the various experiments interesting.

Member Yasuno (chat statement): One thing I thought might be worth trying is an experiment to get feedback on drafts. (It is quite possible this was done but simply not written up.) In a recent paper from Stanford University (https://arxiv.org/abs/2310.01783), when ChatGPT was asked to review papers from Nature, it produced unexpectedly useful feedback (the agreement rate with human reviewers was high). In legal affairs, for example, if a draft bill were input, there is a possibility of receiving feedback with a fairly high agreement rate with the comments that might come from a supervisor or the Legislative Bureau. It may be worth looking at how closely they match.

Secretariat (Yamauchi): Thank you.
I understand this as a proposal to improve accuracy by incorporating feedback on the text that has been drafted.

Yasuno Member: That's right. Feedback on drafts is often useful because it gives you a chance to think, even when the feedback itself is wrong, and I believe it is used in various settings, so I thought it would be good to try such things.

Secretariat (Yamauchi): Thank you very much. I will study it.

Secretariat (Nakano): Thank you very much.
I am sorry the time was cut short, but with that, I would like to conclude today's proceedings.
Finally, Mr. Hasui, Deputy Director-General of the Digital Agency, will offer closing remarks.
Mr. Hasui, please go ahead.

Mr. Hasui: My name is Hasui, from the Digital Agency.
Thank you very much for your various proposals and open discussions today.
We received reports from Daiichi Hoki Co., Ltd. and FRAIM Co., Ltd. on the progress of the survey and demonstration projects on the digitalization of legal affairs and the development and utilization of law data. This project is expected to contribute to work-style reform for national government employees, BPR, and the prevention of errors in draft laws, all of which are extremely important now. It is of great interest to each ministry and agency, and at the same time it will lead to building a foundation for the development of the law Base Registry. We were also briefed on issues identified in the demonstrations so far, and we hope the efforts will advance steadily while receiving opinions from this working group.
In addition, Mr. Tsunoda and Associate Professor Kano gave us very valuable suggestions on AI and law based on their accumulated research and efforts. I am actually in charge of AI at present, and the suggestions were very valuable from that perspective as well. I am also in charge of law, together with Mr. Nakano. In the past, when I searched laws, there would be 5,000 or 10,000 hits and I would give up at that point; considering that, I think things have evolved greatly. Based on what you pointed out today, I would like to conduct various trials while determining the fields in which AI is strong and how to utilize it, and proceed with examining the future vision drawn in the Digital Legal Roadmap.
I would like to conclude my remarks. Thank you very much for your continued support.

Secretariat (Nakano): Deputy Director-General Hasui, thank you for your remarks.
As we have reached the scheduled time, that concludes today's agenda.
I am very sorry that the proceedings ran long and there was not enough time for questions and opinions.
If there are no objections, we will prepare the minutes of today's meeting and publish them after you have confirmed them. All materials will also be made public.
With that said, I would like to conclude today's meeting. Thank you very much for joining us today.

End
