Elon Musk @elonmusk
@O42nl @kirillgroshkov Tesla does INT8 inference. Way more efficient than FP16, but took us a lot of effort to overcome quantization errors. — PolitiTweet.org
Created
Tue Feb 28 19:54:53 +0000 2023
Likes
1,456
Retweets
114
Source
Twitter for iPhone
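The INT8-vs-FP16 tradeoff the tweet refers to can be illustrated with a minimal sketch of symmetric per-tensor INT8 quantization. This is a generic illustration in NumPy, not Tesla's actual scheme: the function names and the choice of symmetric scaling to the range [-127, 127] are assumptions for the example. The round-trip error it measures is the kind of "quantization error" the tweet says took effort to overcome.

```python
import numpy as np

def quantize_int8(x):
    # Symmetric per-tensor quantization: map [-max|x|, max|x|] onto [-127, 127].
    # (Hypothetical helper for illustration; not Tesla's implementation.)
    scale = np.abs(x).max() / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    # Recover an approximation of the original float values.
    return q.astype(np.float32) * scale

x = np.array([0.1, -0.5, 0.9, 0.003], dtype=np.float32)
q, scale = quantize_int8(x)
# The worst-case round-trip error is bounded by half the quantization step.
err = np.abs(dequantize(q, scale) - x).max()
```

INT8 halves memory traffic relative to FP16 and maps onto faster integer matrix units, but as the sketch shows, every value is rounded to one of only 255 levels, so small values (like 0.003 above, which rounds to 0) can lose most of their precision; mitigating that, e.g. with per-channel scales or quantization-aware training, is the hard part.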