
Home · Changes

Update Home authored Jan 10, 2023 by djurado

Home.md @ 3dd6c73e

It is important to consider carefully the implications of this when vectorizing.
Vectorization is based on SIMD processing (single instruction, multiple data), but different code paths require different instructions.
With the RISC-V vector extension, this can be overcome with the help of masked instructions, which allow restricting the writing of a vector instruction's results to only certain elements, selected by a bitmask.
For instance, what proportion of the atom pair interactions (or inner loop iterations) belongs to the *do nothing* group?
Even when masked instructions are used to avoid updating *do nothing* interactions, the instructions still take some time to execute.
So, as opposed to the serial version, a *do nothing* interaction has the same cost in time as any other interaction in the vectorized version with masked instructions.
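To make the masked-instruction idea concrete, here is a minimal sketch in C using the RVV v1.0 intrinsics (assuming a compiler that provides `<riscv_vector.h>`, e.g. a recent GCC or Clang targeting `rv64gcv`). It is not the actual LAMMPS kernel: the array layout, the `cutsq` cutoff test, and the simplified force update are assumptions made for illustration. The point is that the compare produces a bitmask and the masked store writes results only for the lanes inside the cutoff, while the *do nothing* lanes still occupy the vector instruction's execution time.

```c
#include <riscv_vector.h>
#include <stddef.h>

/* Illustrative sketch only, not the LAMMPS kernel: update an accumulator
 * for a chunk of neighbor pairs, but let a bitmask restrict the write-back
 * to the pairs that are inside the cutoff.  Pairs with rsq >= cutsq are the
 * "do nothing" lanes: the mask keeps them from being written, yet they still
 * occupy lanes and therefore cost the same time as the active ones. */
void masked_force_update(const float *rsq, const float *fpair,
                         float *f_acc, float cutsq, size_t n)
{
    size_t i = 0;
    while (i < n) {
        size_t vl = __riscv_vsetvl_e32m1(n - i);
        vfloat32m1_t vrsq = __riscv_vle32_v_f32m1(&rsq[i], vl);
        vfloat32m1_t vfp  = __riscv_vle32_v_f32m1(&fpair[i], vl);
        vfloat32m1_t vacc = __riscv_vle32_v_f32m1(&f_acc[i], vl);
        /* Bitmask: set for pairs inside the cutoff, clear for "do nothing". */
        vbool32_t inside = __riscv_vmflt_vf_f32m1_b32(vrsq, cutsq, vl);
        /* Compute for every lane, then write back only the masked lanes. */
        vfloat32m1_t vnew = __riscv_vfadd_vv_f32m1(vacc, vfp, vl);
        __riscv_vse32_v_f32m1_m(inside, &f_acc[i], vnew, vl);
        i += vl;
    }
}
```

Masked arithmetic variants (the `_m` intrinsics) could be used instead of the masked store; either way, the inactive lanes are not free.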
Before starting work on the vectorization, the code was modified to count the number of interactions that belong to each category.
The flowchart shows the average number of interactions (for a single `i` atom in a timestep) that belong to each category, and the arrows show the same information in percentage form.
Black values show data for the default protein input, while red values correspond to the modified input described in section
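As a rough sketch of that instrumentation (hypothetical: this excerpt only names the *do nothing* category, so the two-way split below is an assumption and the real code distinguishes more cases), a scalar counting pass over the pair distances could look like this:

```c
#include <stddef.h>

/* Hypothetical counting pass (illustration only): tally how many inner-loop
 * iterations of atom i fall into the "do nothing" category (pair outside the
 * cutoff) versus the ones that actually compute a force contribution. */
void count_categories(const double *rsq, size_t jnum, double cutsq,
                      long *n_do_nothing, long *n_computed)
{
    for (size_t jj = 0; jj < jnum; jj++) {
        if (rsq[jj] >= cutsq)
            ++*n_do_nothing;   /* pair beyond the cutoff: nothing to do */
        else
            ++*n_computed;     /* pair contributes to the forces */
    }
}
```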
### Managing 32 bit and 64 bit data types