kenken999 commited on
Commit
6363664
1 Parent(s): f951f92
app.py CHANGED
@@ -30,5 +30,5 @@ import os
30
  from llamafactory.webui.interface import create_ui
31
 
32
  if __name__ == "__main__":
33
- uvicorn.run("mysite.asgi:app", host="0.0.0.0", port=7860)
34
  #uvicorn.run("mysite.asgi:app", host="0.0.0.0", port=7860, reload=True)
 
30
  from llamafactory.webui.interface import create_ui
31
 
32
  if __name__ == "__main__":
33
+ uvicorn.run("mysite.asgi:app", host="0.0.0.0", port=7860,reload=True)
34
  #uvicorn.run("mysite.asgi:app", host="0.0.0.0", port=7860, reload=True)
cache/user_config.yaml ADDED
@@ -0,0 +1,7 @@
 
 
 
 
 
 
 
 
1
+ cache_dir: null
2
+ lang: en
3
+ last_model: LLaMA3-70B-Chat
4
+ path_dict:
5
+ Falcon-180B: tiiuae/falcon-180b
6
+ LLaMA3-70B-Chat: meta-llama/Meta-Llama-3-70B-Instruct
7
+ LLaMA3-8B-Chat: meta-llama/Meta-Llama-3-8B-Instruct
gradio_cached_examples/75/log.csv ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ Gradio ChatInterface,flag,username,timestamp
2
+ "[[""HTML\u306e\u30b5\u30f3\u30d7\u30eb\u3092\u4f5c\u6210\u3057\u3066"", ""**Recap of the plan:** Since we just started, there is no plan yet. Let's define one. We will create a Python project with pytest and dataclasses. We will execute code on the user's machine, so we need to write Markdown code blocks specifying the language after the\n```python\n.\n\n\n```\n```python\n`letsstartbycreatingadirectoryforourprojectandcreatingarequirementstxtfile` disabled or not supported.\n```\n**Recap of the plan:** We are creating a Python project with pytest and dataclasses. We will execute code on the user's machine. Our first step was to create a directory for our project and a requirements.txt file.\n\nLet's create the project structure and the requirements.txt file. Here is the code to do that:\n\n\n```python\n\nmkdir myproject\ncd myproject\necho \""pytest==7.1.2\"" >> requirements.txt\necho \""dataclasses==0.8\"" >> requirements.txt\n\n```\n```python\n\n\n\n\n\n\nNo output available on console.\n```\n**Recap of the plan:** We are creating a Python project with pytest and dataclasses. We will execute code on the user's machine. Our first step was to create a directory for our project and a requirements.txt file. We then created the project structure and the requirements.txt file with pytest and dataclasses.\n\nLet's create a Python file for our project. Here is the code to do that:\n\n```python\n\ntouch main.py\n\n```\n```python\n Cell In[11], line 1\n touch main.py\n ^\nSyntaxError: invalid syntax\nNo output available on console.\n```\n**Recap of the plan:** We are creating a Python project with pytest and dataclasses. We will execute code on the user's machine. Our first step was to create a directory for our project and a requirements.txt file. We then created the project structure and the requirements.txt file with pytest and dataclasses. We attempted to create a Python file for our project, but it seems like we need to do it in a different way.\n\nLet's try to create a Python file for our project using Python code. Here is the code to do that:\n\n\n\n\n```python\n\nimport os\nos.mknod(\""main.py\"")\n\n```\n```python\n Cell In[16], line 1\n `\n ^\nSyntaxError: invalid syntax\nNo output available on console.\n```\n**Recap of the plan:** We are creating a Python project with pytest and dataclasses. We will execute code on the user's machine. Our first step was to create a directory for our project and a requirements.txt file. We then created the project structure and the requirements.txt file with pytest and dataclasses. We attempted to create a Python file for our project, but it seems like we need to do it in a different way.\n\nLet's try to create a Python file for our project using Python code again. Here is the code to do that:\n\n```python\n\n\nimport os\nif not os.path.exists(\""main.py\""):\n with open(\""main.py\"", \""w\"") as f:\n f.write(\""\"")\n\n```\n```python\n Cell In[21], line 1\n `\n ^\nSyntaxError: invalid syntax\nNo output available on console.\n```\n**Recap of the plan:** We are creating a Python project with pytest and dataclasses. We will execute code on the user's machine. Our first step was to create a directory for our project and a requirements.txt file. We then created the project structure and the requirements.txt file with pytest and dataclasses. We attempted to create a Python file for our project using Python code.\n\nLet's remove the backtick and try again. Here is the code to do that:""]]",,,2024-05-29 09:14:49.294968
3
+ "[[""CUDA_VISIBLE_DEVICES=0 llamafactory-cli train examples/lora_single_gpu/llama3_lora_sft.yaml"", ""**Recap of the plan:** We will create a Python project with pytest and dataclasses. We will execute code on the user's machine, so we need to write the code to create a directory for our project and a requirements.txt file.\n\nHere is the code to create a directory for our project and a requirements.txt file:\n\n\n```python\n\n\nimport os\n\nproject_name = \""my_project\""\nos.mkdir(project_name)\nwith open(f\""{project_name}/requirements.txt\"", \""w\"") as f:\n f.write(\""pytest\\n\"")\n f.write(\""dataclasses\\n\"")\n\n```\n```python\n Cell In[28], line 1\n `\n ^\nSyntaxError: invalid syntax\nNo output available on console.\n```\n**Recap of the plan:** We will create a Python project with pytest and dataclasses. We will execute code on the user's machine, so we need to write the code to create a directory for our project and a requirements.txt file. We just created the project directory and requirements.txt file.\n\nLet's create a Python file for our project using Python code.\n\nHere is the code to create a Python file:\n\n```python\n\nwith open(\""my_project/main.py\"", \""w\"") as f:\n f.write(\""# This is our main Python file\\n\"")\n\n```\n```python\n---------------------------------------------------------------------------\nFileNotFoundError Traceback (most recent call last)\nCell In[33], line 2\n 1 print('##active_line1##')\n----> 2 with open('my_project/main.py', 'w') as f:\n 3 print('##active_line2##')\n 4 f.write('# This is our main Python file\\n')\n\nFile /usr/local/lib/python3.10/site-packages/IPython/core/interactiveshell.py:324, in _modified_open(file, *args, **kwargs)\n 317 if file in {0, 1, 2}:\n 318 raise ValueError(\n 319 f\""IPython won't let you open fd={file} by default \""\n 320 \""as it is likely to crash IPython. If you know what you are doing, \""\n 321 \""you can use builtins' open.\""\n 322 )\n--> 324 return io_open(file, *args, **kwargs)\n\nFileNotFoundError: [Errno 2] No such file or directory: 'my_project/main.py'No output available on console.\n```\n**Recap of the plan:** We will create a Python project with pytest and dataclasses. We will execute code on the user's machine, so we need to write the code to create a directory for our project and a requirements.txt file. We just created the project directory and requirements.txt file.\n\nLet's try creating the main.py file again, but this time, we need to make sure the directory exists before trying to create the file.\n\nHere is the code to create the main.py file:\n\n\n```python\n\n\nimport os\nproject_name = \""my_project\""\nif not os.path.exists(project_name):\n os.mkdir(project_name)\nwith open(f\""{project_name}/main.py\"", \""w\"") as f:\n f.write(\""# This is our main Python file\\n\"")\n\n```\n```python\n Cell In[38], line 1\n `\n ^\nSyntaxError: invalid syntax\nNo output available on console.\n```\n**Recap of the plan:** We will create a Python project with pytest and dataclasses. We will execute code on the user's machine, so we need to write the code to create a directory for our project and a requirements.txt file. We created the project directory and requirements.txt file. We also created a main.py file.\n\nLet's add a requirements.txt file with pytest and dataclasses.\n\nHere is the code to add the requirements.txt file:\n\n\n```python\n\nwith open(\""my_project/requirements.txt\"", \""w\"") as f:\n f.write(\""pytest\\ndataclasses\\n\"")\n\n\n```\n```python\n---------------------------------------------------------------------------\nFileNotFoundError Traceback (most recent call last)\nCell In[43], line 2\n 1 print('##active_line1##')\n----> 2 with open('my_project/requirements.txt', 'w') as f:\n 3 print('##active_line2##')\n 4 f.write('pytest\\ndataclasses\\n')\n\nFile /usr/local/lib/python3.10/site-packages/IPython/core/interactiveshell.py:324, in _modified_open(file, *args, **kwargs)\n 317 if file in {0, 1, 2}:\n 318 raise ValueError(\n 319 f\""IPython won't let you open fd={file} by default \""\n 320 \""as it is likely to crash IPython. If you know what you are doing, \""\n 321 \""you can use builtins' open.\""\n 322 )\n--> 324 return io_open(file, *args, **kwargs)\n\nFileNotFoundError: [Errno 2] No such file or directory: 'my_project/requirements.txt'No output available on console.\n```\n**Recap of the plan:** We will create a Python project with pytest and dataclasses. We will execute code on the user's machine, so we need to write the code to create a directory for our project and a requirements.txt file. We created the project directory and attempted to create a requirements.txt file, but it failed because the directory does not exist.\n\nLet's try creating the directory and then create the requirements.txt file.\n\nHere is the code to create the directory and the requirements.txt file:\n\n\n```python\n\nimport os\nproject_name = \""my_project\""\nif not os.path.exists(project_name):\n os.mkdir(project_name)\nwith open(f\""{project_name}/requirements.txt\"", \""w\"") as f:\n f.write(\""pytest\\ndataclasses\\n\"")\n\n```\n```python\nNo output available on console.\n```\n**Recap of the plan:** We will create a Python project with pytest and dataclasses. We will execute code on the user's machine, so we need to write the code to create a directory for our project and a requirements.txt file. We created the project directory and a requirements.txt file with pytest and dataclasses.\n\nLet's create a main.py file and add a simple function to it.\n\nHere is the code to create the main.py file:""]]",,,2024-05-29 09:15:10.596364
log_file.txt ADDED
@@ -0,0 +1,3 @@
 
 
 
 
1
+ 2022-01-01 10:00:00,INFO,This is a sample log message.
2
+ 2022-01-01 10:00:01,ERROR,This is a sample error message.
3
+ 2022-01-01 10:00:02,INFO,This is another sample log message.
my_project/requirements.txt ADDED
@@ -0,0 +1,2 @@
 
 
 
1
+ pytest
2
+ dataclasses
myproject/requirements.txt ADDED
@@ -0,0 +1,2 @@
 
 
 
1
+ pytest==7.1.2
2
+ dataclasses==0.8
mysite/asgi.py CHANGED
@@ -83,8 +83,8 @@ interpreter.llm.max_output = 10000 # 出力の最大トークン数
83
  interpreter.max_output = 10000 # 出力の最大トークン数
84
 
85
 
86
- interpreter.conversation_history = True
87
- interpreter.debug_mode = True
88
  #interpreter.temperature = 0.7
89
 
90
  DESCRIPTION = '''
@@ -115,9 +115,14 @@ PLACEHOLDER = """
115
  """
116
 
117
 
 
 
 
 
 
118
  css = """
119
  .gradio-container {
120
- height: 100vh; /* 全体の高さを100%に設定 */
121
  display: flex;
122
  flex-direction: column;
123
  }
@@ -126,21 +131,19 @@ css = """
126
  display: flex;
127
  flex-direction: column;
128
  }
129
- .gradio-tabitem {
130
  flex: 1; /* 各タブの高さを最大に設定 */
131
  display: flex;
132
  flex-direction: column;
 
133
  }
134
- .gradio-row {
135
- flex: 1; /* 行の高さを最大に設定 */
136
- }
137
- .gradio-column {
138
  display: flex;
139
  flex-direction: column;
140
- justify-content: flex-end; /* 列を下に揃える */
141
  }
142
  .gradio-chatbot {
143
- flex: 1; /* チャットボットの高さを最大に設定 */
144
  overflow-y: auto; /* 縦スクロールを有効にする */
145
  }
146
  """
@@ -228,7 +231,7 @@ Python toolbelt preferences:
228
  - pytest
229
  - dataclasses"""
230
 
231
- interpreter.system_message += PRMPT2#CODE_INTERPRETER_SYSTEM_PROMPT
232
 
233
  def format_response(chunk, full_response):
234
  # Message
@@ -301,12 +304,31 @@ def is_valid_syntax(code):
301
  except SyntaxError:
302
  return False
303
  # 初期のメッセージリスト
 
 
 
 
 
 
 
 
 
 
 
 
 
304
  messages = []
305
  def add_conversation(conversations, num_messages=4):
 
 
 
 
306
  recent_messages = conversations[-num_messages:]
307
  for conversation in recent_messages:
308
  # ユーザーメッセージの追加
309
 
 
 
310
  user_message = conversation[0]
311
  user_entry = {"role": "user", "type": "message", "content": user_message}
312
  messages.append(user_entry)
@@ -316,6 +338,55 @@ def add_conversation(conversations, num_messages=4):
316
  assistant_entry = {"role": "assistant", "type": "message", "content": assistant_message}
317
  messages.append(assistant_entry)
318
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
319
  # Set the environment variable.
320
  def chat_with_interpreter(message, history,a=None,b=None,c=None,d=None):#, openai_api_key):
321
  # Set the API key for the interpreter
@@ -325,13 +396,14 @@ def chat_with_interpreter(message, history,a=None,b=None,c=None,d=None):#, opena
325
  return "Interpreter reset", history
326
  output = ''
327
  full_response = ""
328
- add_conversation(history,20)
329
- user_entry = {"role": "user", "type": "message", "content": message}
330
- messages.append(user_entry)
 
331
  # Call interpreter.chat and capture the result
332
  #message = message + "\nシンタックスを確認してください。"
333
  #result = interpreter.chat(message)
334
- for chunk in interpreter.chat(messages, display=False, stream=True):
335
  #print(chunk)
336
  #output = '\n'.join(item['content'] for item in result if 'content' in item)
337
  full_response = format_response(chunk, full_response)
@@ -350,7 +422,6 @@ def chat_with_interpreter(message, history,a=None,b=None,c=None,d=None):#, opena
350
  """
351
 
352
  yield full_response#, history
353
- #print(f"Captured output: {full_response}")
354
 
355
  #message = gr.Textbox(label='Message', interactive=True)
356
  #openai_api_key = gr.Textbox(label='OpenAI API Key', interactive=True)
@@ -415,21 +486,37 @@ async def completionss(message:str,history,c=None,d=None)->str:
415
  @app.post("/chat/completions")
416
  async def completion(message:str,history,c=None,d=None)->str:
417
  client = Groq(api_key=os.getenv("api_key"))
 
418
 
419
- add_conversation(history)
420
- user_entry = {"role": "user", "type": "message", "content": message}
 
 
 
 
 
 
 
 
 
 
 
 
421
  messages.append(user_entry)
 
 
 
 
 
 
 
 
422
  #messages.append(user_entry)
423
  with async_timeout.timeout(GENERATION_TIMEOUT_SEC):
424
  try:
425
  stream = client.chat.completions.create(
426
  model="llama3-8b-8192",
427
- messages=[
428
- {
429
- "role": "user",
430
- "content": "fdafa"
431
- }
432
- ],
433
  temperature=1,
434
  max_tokens=1024,
435
  top_p=1,
@@ -487,13 +574,13 @@ demo4 = gr.ChatInterface(
487
 
488
 
489
  # Gradio block
490
- chatbot=gr.Chatbot(height=450, placeholder=PLACEHOLDER, label='Gradio ChatInterface')
491
 
492
- with gr.Blocks(fill_height=True, css=css) as demo:
493
 
494
  #gr.Markdown(DESCRIPTION)
495
  #gr.DuplicateButton(value="Duplicate Space for private use", elem_id="duplicate-button")
496
- gr.ChatInterface(
497
  fn=chat_with_interpreter,
498
  chatbot=chatbot,
499
  fill_height=True,
@@ -597,7 +684,7 @@ with gr.Blocks() as mark:
597
  #demo.launch()
598
  # キューを有効にする
599
  chat_interface.queue()
600
- tabs = gr.TabbedInterface([demo4, create_ui(),democ,mark], ["AIで開発", "FineTuning","CHAT","AWS SERVERLESS SYSTEM"])
601
  # カスタムCSSを追加
602
  tabs.css = """
603
  .gradio-container {
@@ -673,7 +760,7 @@ with apps:
673
  with gr.Column(visible=False) as sidebar_right:
674
  gr.Markdown("SideBar Right")
675
  app.mount("/static", StaticFiles(directory="static", html=True), name="static")
676
- app = gr.mount_gradio_app(app, demo4, "/")#, gradio_api_url="http://localhost:7860/")
677
  # テンプレートファイルが格納されているディレクトリを指定
678
  templates = Jinja2Templates(directory="static")
679
 
 
83
  interpreter.max_output = 10000 # 出力の最大トークン数
84
 
85
 
86
+ interpreter.llm.conversation_history = True
87
+ interpreter.llm.debug_mode = True
88
  #interpreter.temperature = 0.7
89
 
90
  DESCRIPTION = '''
 
115
  """
116
 
117
 
118
+ # チャットインターフェースの関数定義
119
+ def chat_with_interpreter(message):
120
+ return "Response: " + message
121
+
122
+ # カスタムCSSの定義
123
  css = """
124
  .gradio-container {
125
+ height: 100vh; /* 全体の高さを100vhに設定 */
126
  display: flex;
127
  flex-direction: column;
128
  }
 
131
  display: flex;
132
  flex-direction: column;
133
  }
134
+ .gradio-tab-item {
135
  flex: 1; /* 各タブの高さを最大に設定 */
136
  display: flex;
137
  flex-direction: column;
138
+ overflow: hidden; /* オーバーフローを隠す */
139
  }
140
+ .gradio-block {
141
+ flex: 1; /* ブロックの高さを最大に設定 */
 
 
142
  display: flex;
143
  flex-direction: column;
 
144
  }
145
  .gradio-chatbot {
146
+ height: 100vh; /* チャットボットの高さを100vhに設定 */
147
  overflow-y: auto; /* 縦スクロールを有効にする */
148
  }
149
  """
 
231
  - pytest
232
  - dataclasses"""
233
 
234
+ interpreter.system_message += CODE_INTERPRETER_SYSTEM_PROMPT
235
 
236
  def format_response(chunk, full_response):
237
  # Message
 
304
  except SyntaxError:
305
  return False
306
  # 初期のメッセージリスト
307
+
308
+ import logging
309
+
310
+ # ロガーの設定
311
+ logging.basicConfig(level=logging.INFO)
312
+ logger = logging.getLogger(__name__)
313
+ # ファイルハンドラの設定
314
+ file_handler = logging.FileHandler('app.log')
315
+ file_handler.setLevel(logging.INFO)
316
+
317
+ # フォーマッタの設定
318
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
319
+ file_handler.setFormatter(formatter)
320
  messages = []
321
  def add_conversation(conversations, num_messages=4):
322
+ # historyの内容をログ出力
323
+ logger.info("--------------------------------------------------------------------------------")
324
+ logger.info("History: %s", str(history))
325
+
326
  recent_messages = conversations[-num_messages:]
327
  for conversation in recent_messages:
328
  # ユーザーメッセージの追加
329
 
330
+
331
+
332
  user_message = conversation[0]
333
  user_entry = {"role": "user", "type": "message", "content": user_message}
334
  messages.append(user_entry)
 
338
  assistant_entry = {"role": "assistant", "type": "message", "content": assistant_message}
339
  messages.append(assistant_entry)
340
 
341
+ def add_memory(prompt, history, num_pair_messages_recall):
342
+ # 記憶するメッセージの数を計算します(ペア数 * 2)
343
+ look_back = -num_pair_messages_recall * 2
344
+
345
+ # historyの長さを考慮してlook_backを調整します
346
+ look_back = max(look_back, -len(history))
347
+
348
+ # 正しい形式のメッセージのみを含める
349
+ valid_history = [
350
+ f"{i['role'].capitalize()}: {i['content']}"
351
+ for i in history[look_back:]
352
+ if 'role' in i and 'content' in i
353
+ ]
354
+
355
+ # 過去のメッセージを改行で結合してメモリとして保存します
356
+ memory = '\n'.join(valid_history).replace('User', '\nUser') # ユーザーのメッセージの前に改行を追加
357
+
358
+ # プロンプトにメモリを追加します
359
+ prompt_with_memory = f"user's request: {prompt}. --- \nBelow is the transcript of your past conversation with the user: {memory} ---\n"
360
+ return prompt_with_memory
361
+
362
+
363
+ # Set the environment variable.
364
+ def chat_with_interpreter(message, history,a=None,b=None,c=None,d=None):#, openai_api_key):
365
+
366
+ # Set the API key for the interpreter
367
+ #interpreter.llm.api_key = openai_api_key
368
+ if message == 'reset':
369
+ interpreter.reset()
370
+ return "Interpreter reset", history
371
+
372
+
373
+ def add_memory(prompt, history, num_pair_messages_recall):
374
+ # historyの長さを取得
375
+ history_length = len(history)
376
+
377
+ # 過去のメッセージ数を計算します
378
+ look_back = max(-2 * num_pair_messages_recall, -history_length)
379
+
380
+ # 過去のメッセージを改行で結合してメモリとして保存します
381
+ memory = '\n'.join(
382
+ [f"{i['role'].capitalize()}: {i['content']}" for i in history[look_back:]]
383
+ ).replace('User', '\nUser') # ユーザーのメッセージの前に改行を追加
384
+
385
+ # プロンプトにメモリを追加します
386
+ prompt_with_memory = f"user's request: {prompt}. --- \nBelow is the transcript of your past conversation with the user: {memory} ---\n"
387
+
388
+ return prompt_with_memory
389
+
390
  # Set the environment variable.
391
  def chat_with_interpreter(message, history,a=None,b=None,c=None,d=None):#, openai_api_key):
392
  # Set the API key for the interpreter
 
396
  return "Interpreter reset", history
397
  output = ''
398
  full_response = ""
399
+ #add_conversation(history,20)
400
+ #messages = add_memory(message,history,20)
401
+ user_entry = {"role": "user", "type": "message", "content": str(messages)}
402
+ #messages.append(user_entry)
403
  # Call interpreter.chat and capture the result
404
  #message = message + "\nシンタックスを確認してください。"
405
  #result = interpreter.chat(message)
406
+ for chunk in interpreter.chat(message, display=False, stream=True):
407
  #print(chunk)
408
  #output = '\n'.join(item['content'] for item in result if 'content' in item)
409
  full_response = format_response(chunk, full_response)
 
422
  """
423
 
424
  yield full_response#, history
 
425
 
426
  #message = gr.Textbox(label='Message', interactive=True)
427
  #openai_api_key = gr.Textbox(label='OpenAI API Key', interactive=True)
 
486
  @app.post("/chat/completions")
487
  async def completion(message:str,history,c=None,d=None)->str:
488
  client = Groq(api_key=os.getenv("api_key"))
489
+ messages = []
490
 
491
+ recent_messages = history[-20:]
492
+ for conversation in recent_messages:
493
+ # ユーザーメッセージの追加
494
+ user_message = conversation[0]
495
+ user_entry = {"role": "user", "content": user_message}
496
+ messages.append(user_entry)
497
+
498
+ # アシスタントメッセージの追加
499
+ assistant_message = conversation[1]
500
+ assistant_entry = {"role": "assistant", "content": assistant_message}
501
+ messages.append(assistant_entry)
502
+
503
+
504
+ user_entry = {"role": "user", "content": message}
505
  messages.append(user_entry)
506
+ add_conversation(history)
507
+
508
+ # Systemプロンプトの追加
509
+ system_prompt = {
510
+ "role": "system",
511
+ "content": "あなたは日本語の優秀なアシスタントです。"
512
+ }
513
+ messages.insert(0, system_prompt) # messages の最初に system プロンプトを追加
514
  #messages.append(user_entry)
515
  with async_timeout.timeout(GENERATION_TIMEOUT_SEC):
516
  try:
517
  stream = client.chat.completions.create(
518
  model="llama3-8b-8192",
519
+ messages=messages,
 
 
 
 
 
520
  temperature=1,
521
  max_tokens=1024,
522
  top_p=1,
 
574
 
575
 
576
  # Gradio block
577
+ chatbot=gr.Chatbot(height=650, placeholder=PLACEHOLDER, label='Gradio ChatInterface')
578
 
579
+ #with gr.Blocks(fill_height=True, css=css) as demo:
580
 
581
  #gr.Markdown(DESCRIPTION)
582
  #gr.DuplicateButton(value="Duplicate Space for private use", elem_id="duplicate-button")
583
+ demo = gr.ChatInterface(
584
  fn=chat_with_interpreter,
585
  chatbot=chatbot,
586
  fill_height=True,
 
684
  #demo.launch()
685
  # キューを有効にする
686
  chat_interface.queue()
687
+ tabs = gr.TabbedInterface([demo, create_ui(),democ,mark], ["AIで開発", "FineTuning","CHAT","AWS SERVERLESS SYSTEM"])
688
  # カスタムCSSを追加
689
  tabs.css = """
690
  .gradio-container {
 
760
  with gr.Column(visible=False) as sidebar_right:
761
  gr.Markdown("SideBar Right")
762
  app.mount("/static", StaticFiles(directory="static", html=True), name="static")
763
+ app = gr.mount_gradio_app(app, tabs, "/")#, gradio_api_url="http://localhost:7860/")
764
  # テンプレートファイルが格納されているディレクトリを指定
765
  templates = Jinja2Templates(directory="static")
766
 
workspace/index.html ADDED
@@ -0,0 +1 @@
 
 
1
+ <html><bodySAMPLE HTML</body></html>
workspace/test_file.py ADDED
@@ -0,0 +1 @@
 
 
1
+ print('Hello from Python!')
workspace/test_file.txt ADDED
@@ -0,0 +1 @@
 
 
1
+ This is a test file!
workspace/workspace/index.html ADDED
@@ -0,0 +1 @@
 
 
1
+ <html><body>SAMPLE HTML</body></html>