Bekhouche committed
Commit d181a48 · 1 Parent(s): 2f065ab

Add main files

Files changed (4)
  1. ACC.py +0 -0
  2. README.md +34 -7
  3. app.py +40 -0
  4. requirements.txt +2 -0
ACC.py ADDED
File without changes
README.md CHANGED
@@ -1,13 +1,40 @@
 ---
-title: ACC
-emoji: 📚
-colorFrom: red
-colorTo: blue
+title: Accuracy
+emoji: 📈
+colorFrom: blue
+colorTo: red
 sdk: gradio
-sdk_version: 5.40.0
+sdk_version: 4.44.0
 app_file: app.py
 pinned: false
-short_description: Accuracy
+license: apache-2.0
+tags:
+- evaluate
+- metric
+short_description: Accuracy (ACC)
+description: >-
+  The Accuracy (ACC) metric measures the proportion of correctly predicted
+  sequences out of the total number of sequences. The metric handles both
+  integer and string inputs by converting them to strings for comparison.
+  ACC ranges from 0 to 1, where 1 indicates perfect accuracy (all
+  predictions are correct) and 0 indicates complete failure (no predictions
+  are correct). It is particularly useful in tasks such as OCR, digit
+  recognition, sequence prediction, and any other task where exact matches
+  are required. Accuracy is calculated as ACC = (Number of Correct
+  Predictions) / (Total Number of Predictions), where a prediction is
+  considered correct if it exactly matches the ground-truth sequence after
+  both are converted to strings.
 ---
+# Metric Card for ACC
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+## Metric Description
+
+The Accuracy (ACC) metric measures the proportion of correctly predicted sequences out of the total number of sequences. It handles both integer and string inputs by converting them to strings for comparison.
+ACC ranges from 0 to 1, where 1 indicates perfect accuracy (all predictions are correct) and 0 indicates complete failure (no predictions are correct).
+It is particularly useful in tasks such as OCR, digit recognition, sequence prediction, and any other task where exact matches are required. Accuracy is calculated as:
+
+ACC = (Number of Correct Predictions) / (Total Number of Predictions)
+
+Where:
+
+A prediction is considered correct if it exactly matches the ground-truth sequence after both are converted to strings.
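Since ACC.py is added empty in this commit, the metric's implementation is not visible in the diff. For reference, the exact-match rule the card defines reduces to a few lines of Python; the sketch below is only an illustration of that rule (the function name is invented), not the module's actual code:

```python
# Sketch of the exact-match accuracy defined in the metric card; not the
# module's actual implementation (ACC.py is empty in this commit).
def exact_match_accuracy(predictions, references):
    if len(predictions) != len(references):
        raise ValueError("predictions and references must have the same length")
    # A prediction counts as correct only if it equals the reference
    # after both are converted to strings.
    correct = sum(str(p) == str(r) for p, r in zip(predictions, references))
    return correct / len(references)

print(exact_match_accuracy([123, "abc", 42], ["123", "abc", "24"]))  # 2/3 match -> 0.666...
```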
app.py ADDED
@@ -0,0 +1,40 @@
+import evaluate
+import gradio as gr
+
+module = evaluate.load("Bekhouche/ACC")  # load the ACC metric from the Hub
+
+def compute_acc(dataframe):
+    predictions = dataframe['Predictions'].tolist()
+    references = dataframe['References'].tolist()
+    if len(predictions) != len(references):
+        return "Error: Number of predictions and references must match!"
+    module.add_batch(predictions=predictions, references=references)
+    result = module.compute()
+    return result
+
+def custom_launch_gradio_widget(module):
+    metric_info = module.info  # public attribute; avoids the private _info() call
+
+    with gr.Blocks() as demo:
+        gr.Markdown(f"### {metric_info.description}")
+        gr.Markdown(f"**Citation:** {metric_info.citation}")
+        gr.Markdown(f"**Inputs Description:** {metric_info.inputs_description}")
+
+        input_data = gr.Dataframe(
+            headers=["Predictions", "References"],
+            row_count=1,
+            label="Input Predictions and References"
+        )
+
+        run_button = gr.Button("Run ACC")
+        output = gr.Textbox(label="ACC Score")
+
+        run_button.click(
+            compute_acc,
+            inputs=input_data,
+            outputs=output,
+        )
+
+    demo.launch()
+
+custom_launch_gradio_widget(module)
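Outside the Gradio UI, the same metric can be exercised directly through the evaluate API, mirroring what compute_acc does on each button click. A minimal sketch, assuming the Space's metric repo loads via evaluate.load as in app.py; the key of the returned dict is an assumption:

```python
import evaluate

module = evaluate.load("Bekhouche/ACC")
# Mixed int/str inputs are fine: the metric compares string conversions.
module.add_batch(predictions=[123, "abc"], references=["123", "abx"])
result = module.compute()
print(result)  # e.g. {"accuracy": 0.5}; the exact result key is an assumption
```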
requirements.txt ADDED
@@ -0,0 +1,2 @@
+evaluate==0.4.3
+gradio==3.50.0