runtime error
████████▉| 1.11G/1.11G [00:18<00:00, 41.5MB/s]
Downloading (…)4820d9532637bd87b3f4: 100%|██████████| 1.11G/1.11G [00:18<00:00, 58.9MB/s]
All model checkpoint layers were used when initializing TFXLMRobertaModel.
All the layers of TFXLMRobertaModel were initialized from the model checkpoint at xlm-roberta-base.
If your task is similar to the task the model of the checkpoint was trained on, you can already use TFXLMRobertaModel for predictions without further training.
Caching examples at: '/home/user/app/gradio_cached_examples/15'
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 3s 3s/step
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 81ms/step
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 82ms/step
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 52ms/step
1/1 [==============================] - ETA: 0s
1/1 [==============================] - 0s 141ms/step
Traceback (most recent call last):
  File "app.py", line 164, in <module>
    iface = gr.Interface(
  File "/home/user/.local/lib/python3.8/site-packages/gradio/interface.py", line 456, in __init__
    self.render_article()
  File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 1200, in __exit__
    self.config = self.get_config_file()
  File "/home/user/.local/lib/python3.8/site-packages/gradio/blocks.py", line 1176, in get_config_file
    "input": list(block.input_api_info()),  # type: ignore
  File "/home/user/.local/lib/python3.8/site-packages/gradio_client/serializing.py", line 41, in input_api_info
    return (api_info["serialized_input"][0], api_info["serialized_input"][1])
KeyError: 'serialized_input'
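
The traceback shows the crash happening while gr.Interface(...) is being constructed in app.py, with the failure itself raised inside gradio_client's serializing.py (KeyError: 'serialized_input'). A plausible reading, not confirmed by the log, is that the gradio and gradio_client packages installed in the Space are from incompatible releases. A minimal sketch to check that hypothesis independently of the model code is below; the predict function, labels, and components are hypothetical stand-ins for whatever app.py actually does.

# Minimal sketch to test the Gradio setup in isolation (assumption: the real
# app.py wraps a TFXLMRobertaModel text classifier; everything below is a
# hypothetical stand-in, not the Space's actual code).
import gradio as gr

def predict(text: str) -> str:
    # Placeholder for the actual model inference in app.py (hypothetical).
    return f"echo: {text}"

iface = gr.Interface(
    fn=predict,
    inputs=gr.Textbox(label="Input text"),
    outputs=gr.Textbox(label="Prediction"),
    examples=[["An example sentence"]],
    cache_examples=True,  # the log shows examples being cached just before the crash
)

if __name__ == "__main__":
    iface.launch()

If this stripped-down app raises the same KeyError, the problem likely sits in the installed gradio / gradio_client pair rather than in the model code, and aligning the two packages (for example by upgrading both in the Space's requirements.txt) would be the usual next step; the exact compatible versions for this Space are an assumption and would need to be verified.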