Changing the text size and weights in pictures #40

Open
ImranNust opened this issue Nov 19, 2024 · 4 comments

Comments

@ImranNust

How can we change the font size and make the text bold so that it remains readable when we use these charts in our research papers? The text on the x- and y-axes of most of the graphs is extremely small compared to the size of the graph.

@AReinke (Collaborator) commented Nov 25, 2024

Thanks for raising this issue; we have added it to our feature list. For now, you can generate the individual plots directly in R and change the fonts there, for example:

library(ggplot2)  # theme() and element_text() come from ggplot2

p <- challengeR::stabilityByTask(ranking, max_size = 8,
                                 ordering = names(ranking %>% consensus(method = "euclidean")))
p + theme(axis.text    = element_text(size = 16),
          axis.title   = element_text(size = 18),
          legend.text  = element_text(size = 16),
          legend.title = element_text(size = 18),
          title        = element_text(size = 20))
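
Since the original question also asks about bold text, a minimal extension of the snippet above (assuming the same ggplot2 theme mechanism) would add face = "bold" to the relevant text elements:

p + theme(axis.text    = element_text(size = 16, face = "bold"),
          axis.title   = element_text(size = 18, face = "bold"),
          legend.text  = element_text(size = 16),
          legend.title = element_text(size = 18, face = "bold"),
          title        = element_text(size = 20, face = "bold"))

For print-quality figures, ggsave() from ggplot2 can then export the themed plot at the desired size and resolution.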

@ImranNust (Author)

Thank you so much for your kind reply. Could you please also help me obtain this composite plot?

[image: composite plot showing all ten algorithms in a single figure]

When I generate the plot for ten algorithms, the package gives me a single composite plot as shown above. However, when I try to generate the plot for sixteen algorithms, it generates a separate plot for each algorithm.

@ImranNust (Author)

This is my code:

# Import the challengeR visualization library
library(challengeR)
# Load our own data frame
data <- read.csv("./mydatacsv")

print(data)
challenge <- as.challenge(data,
                          # Specify which column contains the algorithms, 
                          # which column contains a test case identifier 
                          # and which contains the metric value:
                          by = "task",
                          algorithm = "alg_name", case = "case", value = "value", 
                          # Specify if small metric values are better
                          smallBetter = FALSE)
# Configuring Ranking

ranking <- challenge %>% rankThenAggregate(FUN = mean,
                                           ties.method = "min")
# Perform bootstrapping with 1000 bootstrap samples on 8 active CPU cores (change the parameter if need be)
library(doParallel)
library(doRNG)
registerDoParallel(cores = 8) # number of active CPU cores
registerDoRNG(123)
ranking_bootstrapped <- ranking %>% bootstrap(nboot = 1000, parallel = TRUE, progress = "none")
stopImplicitCluster()
# Define the function
compute_and_rank <- function(ranking) {
  # Extract DSC and NSD data frames
  DSC <- ranking$matlist$DSC
  NSD <- ranking$matlist$NSD
  
  # Compute the mean of rank_mean for each team
  mean_rank_mean <- rowMeans(cbind(DSC$rank_mean, NSD$rank_mean))
  
  # Create a data frame with the computed means
  mean_ranking <- data.frame(
    team = rownames(DSC),
    mean_rank_mean = mean_rank_mean
  )
  
  # Rank the teams based on the mean_rank_mean
  mean_ranking <- mean_ranking[order(mean_ranking$mean_rank_mean), ]
  mean_ranking$rank <- 1:nrow(mean_ranking)
  
  # Format the output as specified
  output <- setNames(mean_ranking$rank, mean_ranking$team)
  attr(output, "method") <- "mean"
  
  return(output)
}



meanRanks <- compute_and_rank(ranking)
print(meanRanks)
ranking_bootstrapped %>% 
  report(consensus = meanRanks,
         title = "MyReport",
         file = "CombinedResult", 
         format = "HTML", # format can be "PDF", "HTML" or "Word"
         latex_engine = "pdflatex", #LaTeX engine for producing PDF output. Options are "pdflatex", "lualatex", and "xelatex"
         clean = TRUE
        ) 

I am new to R, so any help would be appreciated.

@emrekavur commented Nov 26, 2024

Dear Imran,

If it is not confidential, would it be possible to share the data with us? That way, we can try to replicate the error and find a solution. You can send it to us via this contact form: https://www.rankings-reloaded.de/contact
