
Question about metric implementation #59

Open
marcosmacedo opened this issue Aug 25, 2024 · 1 comment
@marcosmacedo
Hi,

Thank you for sharing your CodeBLEU package. Could you please explain why there is an `or 1` expression in the line below from your implementation?

+ theta * (dataflow_match_score or 1)

This is different from the XLCoST implementation.

https://github.com/reddy-lab-code-research/XLCoST/blob/ad46a7df51ea9e88f37a2f7e6edc5cbe4d13b2f2/code/translation/evaluator/CodeBLEU/calc_code_bleu.py#L76

During my testing I get a CodeBLEU score of 0.25 even if all the weights of the metric are zero. Is that the intended behavior?
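The behavior I am seeing seems to come from Python's `or` short-circuiting on falsy values: `0.0` is falsy, so a zero dataflow score is silently replaced by `1`. A minimal sketch of the quoted line, with a hypothetical weight value for illustration:

```python
# Hypothetical weight and component score, mirroring the quoted line.
theta = 0.25
dataflow_match_score = 0.0

# Python's `or` returns the right operand when the left is falsy,
# and 0.0 is falsy, so a zero score becomes 1 here.
term = theta * (dataflow_match_score or 1)
print(term)  # 0.25, even though the dataflow match score is zero
```

So any input where the dataflow match score evaluates to exactly `0.0` contributes `theta * 1` to the total instead of `0`.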

Thank you

@k4black

k4black commented Aug 26, 2024

@marcosmacedo Hey, thank you for spotting this;
TBH I do not remember =)
If I'm not mistaken it relies on examples from the original articles, but let me check.

On it

@k4black k4black self-assigned this Aug 26, 2024