Hello everyone,
I tried loading `tokenizer = GPT2Tokenizer.from_pretrained('gpt2')` and saw that `model_max_length` was 1024; I then tried `gpt2-medium` and it was also 1024.
Can `model_max_length` be changed? If so, how do I do it? My data always exceeds 1024 tokens.
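
For reference, here is a minimal sketch of what I'm running (using the `transformers` library) to check the value:

```python
from transformers import GPT2Tokenizer

# Load the tokenizer for the base GPT-2 checkpoint
tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
print(tokenizer.model_max_length)  # prints 1024

# Same result with the medium checkpoint
tokenizer = GPT2Tokenizer.from_pretrained('gpt2-medium')
print(tokenizer.model_max_length)  # also prints 1024
```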