How can I detect LLM model files that contain malicious scripts which execute at load time or run time, e.g., models that leverage pickle? I am interested in detecting potential vulnerabilities in file formats that support arbitrary code execution (like pickle).
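A minimal sketch of one common approach, assuming the files in question are raw pickle files or zip-based checkpoints (e.g. PyTorch `.pt`/`.pth` archives containing a `data.pkl`): statically walk the pickle opcode stream with Python's standard `pickletools` module, without ever unpickling, and flag opcodes that import or call objects (`GLOBAL`, `STACK_GLOBAL`, `REDUCE`, `INST`, `OBJ`, `NEWOBJ`, `NEWOBJ_EX`), since those are what pickle payloads use to run code. The opcode set and helper names below are illustrative, not from any particular tool.

```python
import pickletools
import sys
import zipfile

# Opcodes that can trigger imports or calls during unpickling.
# GLOBAL / STACK_GLOBAL import arbitrary objects; REDUCE / INST / OBJ /
# NEWOBJ / NEWOBJ_EX call them, which is how pickle payloads execute code.
SUSPICIOUS_OPCODES = {
    "GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX",
}

def scan_pickle_bytes(data: bytes, label: str) -> list[str]:
    """Statically walk the opcode stream; never executes the pickle."""
    findings = []
    try:
        for opcode, arg, pos in pickletools.genops(data):
            if opcode.name in SUSPICIOUS_OPCODES:
                findings.append(f"{label}: offset {pos}: {opcode.name} {arg!r}")
    except Exception as exc:  # truncated or non-pickle data
        findings.append(f"{label}: could not parse ({exc})")
    return findings

def scan_model_file(path: str) -> list[str]:
    """Handle both raw pickle files and zip-based checkpoints
    (e.g. PyTorch archives that contain a data.pkl member)."""
    if zipfile.is_zipfile(path):
        findings = []
        with zipfile.ZipFile(path) as zf:
            for name in zf.namelist():
                if name.endswith(".pkl"):
                    findings += scan_pickle_bytes(zf.read(name), f"{path}:{name}")
        return findings
    with open(path, "rb") as f:
        return scan_pickle_bytes(f.read(), path)

if __name__ == "__main__":
    for p in sys.argv[1:]:
        for line in scan_model_file(p) or [f"{p}: no suspicious opcodes found"]:
            print(line)
```

Note that legitimate checkpoints also use `GLOBAL`/`REDUCE` (e.g. to rebuild tensors), so a naive scan like this flags almost everything; real scanners such as picklescan or fickling refine this by allowlisting known-safe imports and flagging only dangerous ones (`os.system`, `builtins.eval`, `subprocess`, etc.). Preferring formats that cannot execute code at all, such as safetensors, sidesteps the problem entirely.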
submitted by /u/Icy-Percentage-5635