Discover how to select multiple indices from a PyTorch tensor efficiently, without replacement, leveraging the library's random sampling capabilities.

---

This video is based on the question https://stackoverflow.com/q/74204664/ asked by the user 'sachinruk' ( https://stackoverflow.com/u/2530674/ ) and on the answer https://stackoverflow.com/a/74333810/ provided by the user 'Bob' ( https://stackoverflow.com/u/12750353/ ) on the Stack Overflow website. Thanks to these great users and the Stack Exchange community for their contributions. Visit these links for the original content and further details, such as alternate solutions, the latest updates on the topic, comments, and revision history. The original title of the question was: "Pytorch - Selecting n indices without replacement from dimension x". Content (except music) is licensed under CC BY-SA https://meta.stackexchange.com/help/l... ; the original question post and the original answer post are each licensed under CC BY-SA 4.0 ( https://creativecommons.org/licenses/... ). If anything seems off to you, please feel free to write me at vlogize [AT] gmail [DOT] com.

---

Efficiently Select Indices Without Replacement in PyTorch

If you're working with PyTorch and need to randomly select n indices from a tensor's dimension without replacement, you may have run into inefficiencies in the standard methods. Let's unpack this problem and explore faster approaches for selecting indices from your tensors, specifically when handling multi-dimensional data.

The Problem

Imagine you have a tensor of embeddings defined as:

[[See Video to Reveal this Text or Code Snippet]]

Here, emb_user has shape (64, 128, 256): 64 is the number of samples, 128 is the dimension you want to sample from, and 256 is the feature size.
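The snippet placeholders stand in for code shown only in the video. A minimal sketch of the setup and the common multinomial-based baseline, assuming the shapes described above (the variable names other than emb_user are illustrative), might look like this:

```python
import torch

# Hypothetical setup matching the shapes in the question:
# 64 samples, a dimension of 128 to sample from, 256 features.
emb_user = torch.randn(64, 128, 256)

# Common baseline: draw 16 indices per row without replacement
# via torch.multinomial over uniform weights.
weights = torch.ones(64, 128)
idx = torch.multinomial(weights, num_samples=16, replacement=False)  # (64, 16)

# Gather the sampled slices from the second dimension.
sampled = torch.gather(emb_user, 1, idx.unsqueeze(-1).expand(-1, -1, 256))
print(sampled.shape)  # torch.Size([64, 16, 256])
```

Because replacement=False, each row of idx contains 16 distinct indices in [0, 128), and gather pulls the corresponding 16 slices out of each of the 64 instances.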
You want to sample 16 indices from the second dimension, without replacement, for each of the 64 instances. The common approach looks like this:

[[See Video to Reveal this Text or Code Snippet]]

However, this approach can be inefficient, since torch.multinomial may not be optimal for your needs.

The Solution: Enhanced Random Sampling

Instead of torch.multinomial, you can use torch.randint and sort the results. Here is the more direct method:

[[See Video to Reveal this Text or Code Snippet]]

How It Works

Random integer generation: torch.randint(0, 128 - 15, (64, 16)) generates random integers within bounds that leave room for the offsets added in the next steps.
Sorting: torch.sort() puts each row's integers in non-decreasing order.
Index increment: adding torch.arange(0, 16) makes the sorted indices strictly increasing, guaranteeing selection without replacement (the largest possible index is 112 + 15 = 127).

Performance Comparison

Timing comparisons on different devices yield:
For device='cpu': original method: 427 µs; optimized method: 784 µs.
For device='cuda': original method: 135 µs; optimized methods: 260 µs and 469 µs.
In these particular measurements the multinomial baseline is actually faster, so the sorting-based methods are not a guaranteed win; performance depends on tensor sizes and hardware, and it is worth benchmarking on your own workload.

Alternative Approaches

If you are interested in experimenting further, here are two more methods:

Cumulative sum of differences:

[[See Video to Reveal this Text or Code Snippet]]

Using a sorted random tensor:

[[See Video to Reveal this Text or Code Snippet]]

Conclusion

Selecting indices without replacement from a PyTorch tensor can be an expensive operation, but by choosing the sampling technique carefully and understanding how PyTorch operates on tensor indices, you can improve performance. Whether you stick with the sorting method or explore the alternative strategies, applying these techniques can help streamline your data processing tasks.
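Since the exact snippets appear only in the video, the sorting-based method and the two alternatives can only be sketched from the descriptions above; treat the following as reconstructions under the stated shapes (64 rows, sampling 16 of 128 indices):

```python
import torch

n_rows, dim, k = 64, 128, 16

# 1) randint + sort + arange: draw k integers per row from a reduced
#    range, sort them, then add 0..k-1 so the indices become strictly
#    increasing -- i.e. distinct ("without replacement").
idx = torch.randint(0, dim - (k - 1), (n_rows, k))
idx = idx.sort(dim=1).values + torch.arange(k)

# 2) Cumulative sum of differences: draw random gaps, normalize them to
#    sum to 1, cumsum to get increasing positions in (0, 1), scale into
#    [0, dim - k), floor, and again add 0..k-1 to force strict increase.
gaps = torch.rand(n_rows, k + 1)
gaps = gaps / gaps.sum(dim=1, keepdim=True)
idx2 = (gaps.cumsum(dim=1)[:, :-1] * (dim - k)).long() + torch.arange(k)

# 3) Sorted random tensor: argsort a uniform random matrix; the first k
#    columns of each row form a uniformly random k-subset of 0..dim-1.
idx3 = torch.rand(n_rows, dim).argsort(dim=1)[:, :k]

for name, ix in [("sorted-randint", idx), ("cumsum", idx2), ("argsort", idx3)]:
    assert ix.shape == (n_rows, k)
    # every row contains k distinct, in-range indices
    assert all(len(set(row.tolist())) == k for row in ix)
    assert int(ix.min()) >= 0 and int(ix.max()) < dim
```

One caveat worth noting: only the argsort variant samples uniformly over all 16-element subsets; the first two variants guarantee distinct indices but induce a somewhat different distribution over subsets, which may or may not matter for your application.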
Now you have a solid foundation to make your index selection processes both effective and efficient in PyTorch. Happy coding!