  • Journal List
  • PLoS Comput Biol
  • v.17(12); 2021 Dec

Logo of ploscomp

Ten simple rules for effective presentation slides

Kristen M. Naegle

Biomedical Engineering and the Center for Public Health Genomics, University of Virginia, Charlottesville, Virginia, United States of America

Introduction

The “presentation slide” is the building block of all academic presentations, whether they are journal clubs, thesis committee meetings, short conference talks, or hour-long seminars. A slide is a single page projected on a screen, usually built on the premise of a title, body, and figures or tables, and includes both what is shown and what is spoken about that slide. Multiple slides are strung together to tell the larger story of the presentation. While there have been excellent ten simple rules on giving entire presentations [1,2], guidance has been lacking on the fine details of how to design a slide for optimal effect—such as the design elements that allow slides to convey meaningful information, to keep the audience engaged and informed, and to deliver the intended information in the time frame allowed. As all research presentations seek to teach, effective slide design borrows from the same principles as effective teaching, including consideration of the cognitive processing your audience relies on to organize, process, and retain information. This is written for anyone who needs to prepare slides of any length and for most purposes of conveying research to broad audiences. The rules are broken into 3 primary areas. Rules 1 to 5 are about optimizing the scope of each slide. Rules 6 to 8 are about principles for designing the elements of the slide. Rules 9 and 10 are about preparing for your presentation, with the slides as the central focus of that preparation.

Rule 1: Include only one idea per slide

Each slide should have one central objective to deliver—the main idea or question [3–5]. Often, this means breaking complex ideas down into manageable pieces (see Fig 1, where “background” information has been split into 2 key concepts). In another example, if you are presenting a complex computational approach in a large flow diagram, introduce it in smaller units, building it up until you finish with the entire diagram. The progressive buildup of complex information means that audiences are prepared to understand the whole picture, once you have dedicated time to each of the parts. You can accomplish the buildup of components in several ways—for example, using presentation software to cover/uncover information. Personally, I choose to create separate slides for each piece of information content I introduce—where the final slide has the entire diagram, and I use cropping or a cover on duplicated slides that come before to hide what I’m not yet ready to include. I use this method to ensure that each slide in my deck truly presents one specific idea (the new content) and that the amount of new information on that slide can be described in 1 minute (Rule 2), but it comes with a trade-off: a change to the format of one slide in the series often means changes to all slides.

Fig 1.

Top left: A background slide that describes the background material on a project from my lab. The slide was created using a PowerPoint Design Template, which had to be modified to increase default text sizes for this figure (i.e., the default text sizes are even worse than shown here). Bottom row: The 2 new slides that break up the content into 2 explicit ideas about the background, using a central graphic. In the first slide, the graphic is an explicit example of the SH2 domain of PI3-kinase interacting with a phosphorylation site (Y754) on the PDGFR to describe the important details of what an SH2 domain and phosphotyrosine ligand are and how they interact. I use that same graphic in the second slide to generalize all binding events and include redundant text to drive home the central message (a lot of possible interactions might occur in the human proteome, more than we can currently measure). Top right highlights which rules were used to move from the original slide to the new slide. Specific changes as highlighted by Rule 7 include increasing contrast by changing the background color, increasing font size, changing to sans serif fonts, and removing all capital text and underlining (using bold to draw attention). PDGFR, platelet-derived growth factor receptor.

Rule 2: Spend only 1 minute per slide

When you present your slide in the talk, it should take 1 minute or less to discuss. This rule is really helpful for planning purposes—a 20-minute presentation should have somewhere around 20 slides. Also, frequently giving your audience new information to feast on helps keep them engaged. During practice, if you find yourself spending more than a minute on a slide, there’s too much for that one slide—it’s time to break up the content into multiple slides or even remove information that is not wholly central to the story you are trying to tell. Reduce, reduce, reduce, until you get to a single message, clearly described, which takes less than 1 minute to present.

Rule 3: Make use of your heading

When each slide conveys only one message, use the heading of that slide to write exactly the message you are trying to deliver. Instead of titling the slide “Results,” try “CTNND1 is central to metastasis” or “False-positive rates are highly sample specific.” Use this landmark signpost to ensure that all the content on that slide is related exactly to the heading and only the heading. Think of the slide heading as the introductory or concluding sentence of a paragraph and the slide content the rest of the paragraph that supports the main point of the paragraph. An audience member should be able to follow along with you in the “paragraph” and come to the same conclusion sentence as your header at the end of the slide.

Rule 4: Include only essential points

While you are speaking, audience members’ eyes and minds will be wandering over your slide. If you have a comment, detail, or figure on a slide, have a plan to explicitly identify and talk about it. If you don’t think it’s important enough to spend time on, then don’t have it on your slide. This is especially important when faculty are present. I often tell students that thesis committee members are like cats: If you put a shiny bauble in front of them, they’ll go after it. Be sure to only put the shiny baubles on slides that you want them to focus on. Putting together a thesis meeting for only faculty is really an exercise in herding cats (if you have cats, you know this is no easy feat). Clear and concise slide design will go a long way in helping you corral those easily distracted faculty members.

Rule 5: Give credit, where credit is due

An exception to Rule 4 is to include proper citations or references to work on your slide. When adding citations, names of other researchers, or other types of credit, use a consistent style and method for adding this information to your slides. Your audience will then be able to easily partition this information from the other content. A common mistake people make is to think “I’ll add that reference later,” but I highly recommend you put the proper reference on the slide at the time you make it, before you forget where it came from. Finally, in certain kinds of presentations, credits can make it clear who did the work. For the faculty members heading labs, it is an effective way to connect your audience with the personnel in the lab who did the work, which is a great career booster for that person. For graduate students, it is an effective way to delineate your contribution to the work, especially in meetings where the goal is to establish your credentials for meeting the rigors of a PhD checkpoint.

Rule 6: Use graphics effectively

As a rule, you should almost never have slides that only contain text. Build your slides around good visualizations. It is a visual presentation after all, and as they say, a picture is worth a thousand words. However, on the flip side, don’t muddy the point of the slide by putting too many complex graphics on a single slide. A multipanel figure that you might include in a manuscript should often be broken into 1 panel per slide (see Rule 1). One way to ensure that you use the graphics effectively is to make a point to introduce the figure and its elements to the audience verbally, especially for data figures. For example, you might say the following: “This graph here shows the measured false-positive rate for an experiment, and each point is a replicate of the experiment; the graph demonstrates …” If you have put too much on one slide to present in 1 minute (see Rule 2), then the complexity or number of the visualizations is too much for just one slide.

Rule 7: Design to avoid cognitive overload

The type of slide elements, the number of them, and how you present them all impact the audience’s ability to take in, organize, and remember the content. For example, a frequent mistake in slide design is to include full sentences, but reading and verbal processing use the same cognitive channels—therefore, an audience member can either read the slide, listen to you, or do some part of both (each poorly), as a result of cognitive overload [4]. The visual channel is separate, allowing images/videos to be processed with auditory information without cognitive overload [6] (Rule 6). As presentations are an exercise in listening, and not reading, do what you can to optimize the ability of the audience to listen. Use words sparingly as “guide posts” for you and the audience about the major points of the slide. In fact, you can add short text fragments, redundant with the verbal component of the presentation, which has been shown to improve retention [7] (see Fig 1 for an example of redundant text that avoids cognitive overload). Be careful in the selection of a slide template to minimize accidentally adding elements that the audience must process, but that are unimportant. David JP Phillips argues (and effectively demonstrates in his TEDx talk [5]) that the human brain can easily interpret 6 elements; more than that requires a 500% increase in cognitive load—so keep the total number of elements on the slide to 6 or fewer. Finally, in addition to the use of short text, white space, and the effective use of graphics/images, you can further improve the ease of cognitive processing by considering color choices and font type and size. Here are a few suggestions for improving the experience for your audience, highlighting the importance of these elements for some specific groups:

  • Use high contrast colors and simple backgrounds with low to no color—for persons with dyslexia or visual impairment.
  • Use sans serif fonts and large font sizes (including figure legends), and avoid italics, underlining (use bold font instead for emphasis), and all capital letters—for persons with dyslexia or visual impairment [8].
  • Use color combinations and palettes that can be understood by those with different forms of color blindness [9]. There are excellent tools available to identify colors to use and ways to simulate your presentation or figures as they might be seen by a person with color blindness (easily found by a web search).
  • In this increasingly virtual world of presentation tools, consider practicing your talk with a closed captioning system capturing your words. Use this to identify how to improve your speaking pace, volume, and enunciation to improve understanding by all members of your audience, but especially those with a hearing impairment.

Rule 8: Design the slide so that a distracted person gets the main takeaway

It is very difficult to stay focused on a presentation, especially if it is long or if it is part of a longer series of talks at a conference. Audience members may get distracted by an important email, or they may start dreaming of lunch. So, it’s important to look at your slide and ask “If they heard nothing I said, will they understand the key concept of this slide?” The other rules are set up to help with this, including clarity of the single point of the slide (Rule 1), titling it with a major conclusion (Rule 3), and the use of figures (Rule 6) and short text redundant to your verbal description (Rule 7). However, with each slide, step back and ask whether its main conclusion is conveyed, even if someone didn’t hear your accompanying dialog. Importantly, ask if the information on the slide is at the right level of abstraction. For example, do you have too many details about the experiment, which hides the conclusion of the experiment (i.e., breaking Rule 1)? If you are worried about not having enough details, keep a slide at the end of your slide deck (after your conclusions and acknowledgments) with the more detailed information that you can refer to during a question and answer period.

Rule 9: Iteratively improve slide design through practice

Well-designed slides that follow the first 8 rules are intended to help you deliver the message you intend and in the amount of time you intend to deliver it in. The best way to ensure that you nailed slide design for your presentation is to practice, typically a lot. The most important aspects of practicing a new presentation, with an eye toward slide design, are the following 2 key points: (1) practice to ensure that you hit, each time through, the most important points (for example, the text guide posts you left yourself and the title of the slide); and (2) practice to ensure that as you conclude the end of one slide, it leads directly to the next slide. Slide transitions, what you say as you end one slide and begin the next, are important to keeping the flow of the “story.” Practice is when I discover that the order of my presentation is poor or that I left myself too few guideposts to remember what was coming next. Additionally, during practice, the most frequent things I have to improve relate to Rule 2 (the slide takes too long to present, usually because I broke Rule 1, and I’m delivering too much information for one slide), Rule 4 (I have a nonessential detail on the slide), and Rule 5 (I forgot to give a key reference). The very best type of practice is in front of an audience (for example, your lab or peers), where, with fresh perspectives, they can help you identify places for improving slide content, design, and connections across the entirety of your talk.

Rule 10: Design to mitigate the impact of technical disasters

The real presentation almost never goes as we planned in our heads or during our practice. Maybe the speaker before you went over time and now you need to adjust. Maybe the computer the organizer is having you use won’t show your video. Maybe your internet is poor on the day you are giving a virtual presentation at a conference. Technical problems are routinely part of the practice of sharing your work through presentations. Hence, you can design your slides to limit the impact certain kinds of technical disasters create and also prepare alternate approaches. Here are just a few examples of the preparation you can do that will take you a long way toward avoiding a complete fiasco:

  • Save your presentation as a PDF—if the version of Keynote or PowerPoint on a host computer causes issues, you still have a functional copy with a higher guarantee of compatibility.
  • If using videos, create a backup slide with screenshots of key results. For example, if I have a video of cell migration, I’ll be sure to have a copy of the start and end of the video, in case the video doesn’t play. Even if the video works, you can pause on this backup slide and take the time to highlight the key results in words if someone could not see or understand the video.
  • Avoid animations, such as figures or text that flash/fly-in/etc. Surveys suggest that no one likes movement in presentations [3,4]. There is likely a cognitive underpinning to the almost universal distaste for pointless animations that relates to the idea proposed by Kosslyn and colleagues that animations are salient perceptual units that capture direct attention [4]. Although perceptual salience can be used to draw attention to and improve retention of specific points, if you use this approach for unnecessary/unimportant things (like animation of your bullet point text, fly-ins of figures, etc.), then you will distract your audience from the important content. Finally, animations cause additional processing burdens for people with visual impairments [10] and create opportunities for technical disasters if the software on the host system is not compatible with your planned animation.

Conclusions

These rules are just a start in creating more engaging presentations that increase audience retention of your material. However, there are wonderful resources for continuing the journey of becoming an amazing public speaker, which includes understanding the psychology and neuroscience behind human perception and learning. For example, as highlighted in Rule 7, David JP Phillips has a wonderful TEDx talk on the subject [5], and “PowerPoint presentation flaws and failures: A psychological analysis,” by Kosslyn and colleagues goes into great depth on a number of aspects of human cognition and presentation style [4]. There are many books on the topic, including the popular “Presentation Zen” by Garr Reynolds [11]. Finally, although only briefly touched on here, the visualization of data is an entire topic of its own that is worth perfecting for both written and oral presentations of work, with fantastic resources like Edward Tufte’s “The Visual Display of Quantitative Information” [12] or the article “Visualization of Biomedical Data” by O’Donoghue and colleagues [13].

Acknowledgments

I would like to thank the countless presenters, colleagues, students, and mentors from whom I have learned a great deal about effective presentations. Also, thank you to the organizations that have published wonderful resources on how to increase inclusivity. A special thanks to Dr. Jason Papin and Dr. Michael Guertin for early feedback on this editorial.

Funding Statement

The author received no specific funding for this work.

Paper Presentation Requirements for CVPR 2021

Paper presentations at CVPR'21

All accepted papers should prepare a 5-minute video and a poster PDF. In addition, all papers will have a "live session" where attendees can meet the authors via a video link to discuss the paper. See details below:

(1) A 5-minute video presentation

Authors should prepare a 5-minute video presentation of their work. Videos will be made available on the conference platform for viewing at any time during the conference. Attendees will be able to post questions to the authors asynchronously via a text Q/A box associated with each paper. Videos should be encoded in MP4 format, at 1920x1080, using H.264 compression. Please ensure that your associated audio is clear and at the appropriate level. The maximum video file size allowed by the virtual platform is 2 GB; however, we recommend a much smaller file size (e.g., 50 MB) to avoid issues with uploading/downloading. Note that our virtual platform provider has a feature where you can record audio to slides instead of uploading an MP4 file. Please enable closed captioning.
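As a rough pre-upload sanity check, the size constraints stated above (a 2 GB platform cap, with roughly 50 MB recommended) could be verified with a short script; the `check_video` helper below is a hypothetical sketch, not part of the conference tooling:

```python
import os
import tempfile

MAX_BYTES = 2 * 1024**3           # hard platform cap: 2 GB
RECOMMENDED_BYTES = 50 * 1024**2  # recommended size: ~50 MB

def check_video(path):
    """Check a video file against the upload constraints stated above."""
    if not path.lower().endswith(".mp4"):
        return "warning: platform expects an MP4 container"
    size = os.path.getsize(path)
    if size > MAX_BYTES:
        return "error: exceeds the 2 GB platform limit"
    if size > RECOMMENDED_BYTES:
        return "warning: above the recommended ~50 MB"
    return "ok"

# Demo with a tiny placeholder file standing in for a real recording.
with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as f:
    f.write(b"\x00" * 1024)
print(check_video(f.name))  # → ok
os.remove(f.name)
```

Note that this only checks the container extension and size; it does not inspect the codec, which would require a tool such as ffprobe.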

(2) A poster-PDF

Authors should also prepare a poster PDF for their paper.   The virtual platform has a feature to browse all posters in a paper session.  Posters can also be used as a talking point for the live sessions.  A template for the poster-PDF can be found here:

https://www.dropbox.com/s/a3amiwnpfj316r1/cvpr21_poster_template.pptx

The other features on the Harvester are OPTIONAL, and you are welcome to use them in support of your paper.

-------------

Poster Instructions If your workshop has posters, a template for poster-PDF can be found here: https://www.dropbox.com/s/a3amiwnpfj316r1/cvpr21_poster_template.pptx

Video instructions

For video on the virtual platform, there is a limit of 5 minutes maximum for uploaded videos.

It is recommended that videos are prepared as MP4 using 1920x1080 with H.264 compression, although other formats can be supported. The only constraint for the virtual platform is that the video file is less than 2 GB; however, we recommend you aim for a smaller size to ensure easy upload.

All videos should have narration and, if possible, closed captioning. You may get someone else to do your voice over. Human narration is preferred but text-to-speech (TTS) is allowed if it makes the video easier to understand. 

When presenting, please introduce yourself. We recommend that any text/math use at least a 24-point font (and ideally >32 pt), as smaller fonts will not be readable on small mobile screens.

Examples of a 5-minute CVPR video

If this is your first CVPR paper and you need ideas on how to present your work, here are some nice examples on YouTube from CVPR 2020.

  • https://www.youtube.com/watch?v=Pla8p9Nqlb8
  • https://www.youtube.com/watch?v=BNaIGI4VncM
  • https://www.youtube.com/watch?v=t6TuAGZ9sRg

Video/Slide Presentation Formatting

Most videos will be recordings of slide presentations (e.g., PPT). If you prepare your presentation using PowerPoint, you can time your slides and save the presentation as a WMV video directly from PowerPoint. Some instructions on how to do this can be found here: https://support.office.com/en-us/article/Turn-your-presentation-into-a-video-c140551f-cb37-4818-b5d4-3e30815c3e83

The Cadmium Harvester site you were invited to has a feature to record audio with your slides. This can be used for Workshop or Main Conference Papers.

WMV can be converted into MP4 using YouTube or FFmpeg.
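For the FFmpeg route, the conversion could be sketched as below; the helper assembles the command rather than running it, and the specific flags (H.264 video, AAC audio, 1080p scaling) are common illustrative defaults, not conference-mandated settings:

```python
# Sketch: build an ffmpeg command line for a WMV-to-MP4 conversion.
def ffmpeg_wmv_to_mp4(src, dst, width=1920, height=1080):
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",                 # H.264 video
        "-vf", f"scale={width}:{height}",  # 1920x1080 output
        "-pix_fmt", "yuv420p",             # broad player compatibility
        "-c:a", "aac",                     # AAC audio
        dst,
    ]

# Print the command so it can be copied into a shell.
print(" ".join(ffmpeg_wmv_to_mp4("talk.wmv", "talk.mp4")))
```

Building the argument list (rather than a single string) makes it safe to pass directly to `subprocess.run` without shell quoting concerns.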

Alternatively, there are many free screen capture programs that directly produce a proper MP4:

  • VLC (http://www.videolan.org/vlc/index.html), which works on all platforms.
  • OBS (https://obsproject.com/)
  • TinyTake (http://tinytake.com/) for Windows.
  • For Mac, there is the built-in QuickTime, which will need to be exported as MP4 (the QuickTime default is MOV).
  • For Mac users using Keynote: do Play > Record Slideshow…, then record it. This seems to just record timestamps; you then do File > Export to Movie, select “Slideshow Recording”, and set a custom resolution (1920×1080).

Published: December 2, 2021

Citation: Naegle KM (2021) Ten simple rules for effective presentation slides. PLoS Comput Biol 17(12): e1009554. https://doi.org/10.1371/journal.pcbi.1009554

Copyright: © 2021 Kristen M. Naegle. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Competing interests: The author has declared no competing interests exist.

Introduction

The “presentation slide” is the building block of all academic presentations, whether they are journal clubs, thesis committee meetings, short conference talks, or hour-long seminars. A slide is a single page projected on a screen, usually built on the premise of a title, body, and figures or tables and includes both what is shown and what is spoken about that slide. Multiple slides are strung together to tell the larger story of the presentation. While there have been excellent 10 simple rules on giving entire presentations [ 1 , 2 ], there was an absence in the fine details of how to design a slide for optimal effect—such as the design elements that allow slides to convey meaningful information, to keep the audience engaged and informed, and to deliver the information intended and in the time frame allowed. As all research presentations seek to teach, effective slide design borrows from the same principles as effective teaching, including the consideration of cognitive processing your audience is relying on to organize, process, and retain information. This is written for anyone who needs to prepare slides from any length scale and for most purposes of conveying research to broad audiences. The rules are broken into 3 primary areas. Rules 1 to 5 are about optimizing the scope of each slide. Rules 6 to 8 are about principles around designing elements of the slide. Rules 9 to 10 are about preparing for your presentation, with the slides as the central focus of that preparation.

Rule 1: Include only one idea per slide

Each slide should have one central objective to deliver—the main idea or question [ 3 – 5 ]. Often, this means breaking complex ideas down into manageable pieces (see Fig 1 , where “background” information has been split into 2 key concepts). In another example, if you are presenting a complex computational approach in a large flow diagram, introduce it in smaller units, building it up until you finish with the entire diagram. The progressive buildup of complex information means that audiences are prepared to understand the whole picture, once you have dedicated time to each of the parts. You can accomplish the buildup of components in several ways—for example, using presentation software to cover/uncover information. Personally, I choose to create separate slides for each piece of information content I introduce—where the final slide has the entire diagram, and I use cropping or a cover on duplicated slides that come before to hide what I’m not yet ready to include. I use this method in order to ensure that each slide in my deck truly presents one specific idea (the new content) and the amount of the new information on that slide can be described in 1 minute (Rule 2), but it comes with the trade-off—a change to the format of one of the slides in the series often means changes to all slides.

thumbnail

  • PPT PowerPoint slide
  • PNG larger image
  • TIFF original image

Top left: A background slide that describes the background material on a project from my lab. The slide was created using a PowerPoint Design Template, which had to be modified to increase default text sizes for this figure (i.e., the default text sizes are even worse than shown here). Bottom row: The 2 new slides that break up the content into 2 explicit ideas about the background, using a central graphic. In the first slide, the graphic is an explicit example of the SH2 domain of PI3-kinase interacting with a phosphorylation site (Y754) on the PDGFR to describe the important details of what an SH2 domain and phosphotyrosine ligand are and how they interact. I use that same graphic in the second slide to generalize all binding events and include redundant text to drive home the central message (a lot of possible interactions might occur in the human proteome, more than we can currently measure). Top right highlights which rules were used to move from the original slide to the new slide. Specific changes as highlighted by Rule 7 include increasing contrast by changing the background color, increasing font size, changing to sans serif fonts, and removing all capital text and underlining (using bold to draw attention). PDGFR, platelet-derived growth factor receptor.

https://doi.org/10.1371/journal.pcbi.1009554.g001

Rule 2: Spend only 1 minute per slide

When you present your slide in the talk, it should take 1 minute or less to discuss. This rule is really helpful for planning purposes—a 20-minute presentation should have somewhere around 20 slides. Also, frequently giving your audience new information to feast on helps keep them engaged. During practice, if you find yourself spending more than a minute on a slide, there’s too much for that one slide—it’s time to break up the content into multiple slides or even remove information that is not wholly central to the story you are trying to tell. Reduce, reduce, reduce, until you get to a single message, clearly described, which takes less than 1 minute to present.

Rule 3: Make use of your heading

When each slide conveys only one message, use the heading of that slide to write exactly the message you are trying to deliver. Instead of titling the slide “Results,” try “CTNND1 is central to metastasis” or “False-positive rates are highly sample specific.” Use this landmark signpost to ensure that all the content on that slide is related exactly to the heading and only the heading. Think of the slide heading as the introductory or concluding sentence of a paragraph and the slide content the rest of the paragraph that supports the main point of the paragraph. An audience member should be able to follow along with you in the “paragraph” and come to the same conclusion sentence as your header at the end of the slide.

Rule 4: Include only essential points

While you are speaking, audience members’ eyes and minds will be wandering over your slide. If you have a comment, detail, or figure on a slide, have a plan to explicitly identify and talk about it. If you don’t think it’s important enough to spend time on, then don’t have it on your slide. This is especially important when faculty are present. I often tell students that thesis committee members are like cats: If you put a shiny bauble in front of them, they’ll go after it. Be sure to only put the shiny baubles on slides that you want them to focus on. Putting together a thesis meeting for only faculty is really an exercise in herding cats (if you have cats, you know this is no easy feat). Clear and concise slide design will go a long way in helping you corral those easily distracted faculty members.

Rule 5: Give credit, where credit is due

An exception to Rule 4 is to include proper citations or references to work on your slide. When adding citations, names of other researchers, or other types of credit, use a consistent style and method for adding this information to your slides. Your audience will then be able to easily partition this information from the other content. A common mistake people make is to think “I’ll add that reference later,” but I highly recommend you put the proper reference on the slide at the time you make it, before you forget where it came from. Finally, in certain kinds of presentations, credits can make it clear who did the work. For the faculty members heading labs, it is an effective way to connect your audience with the personnel in the lab who did the work, which is a great career booster for that person. For graduate students, it is an effective way to delineate your contribution to the work, especially in meetings where the goal is to establish your credentials for meeting the rigors of a PhD checkpoint.

Rule 6: Use graphics effectively

As a rule, you should almost never have slides that only contain text. Build your slides around good visualizations. It is a visual presentation after all, and as they say, a picture is worth a thousand words. However, on the flip side, don’t muddy the point of the slide by putting too many complex graphics on a single slide. A multipanel figure that you might include in a manuscript should often be broken into 1 panel per slide (see Rule 1 ). One way to ensure that you use the graphics effectively is to make a point to introduce the figure and its elements to the audience verbally, especially for data figures. For example, you might say the following: “This graph here shows the measured false-positive rate for an experiment and each point is a replicate of the experiment, the graph demonstrates …” If you have put too much on one slide to present in 1 minute (see Rule 2 ), then the complexity or number of the visualizations is too much for just one slide.

Rule 7: Design to avoid cognitive overload

The type of slide elements, the number of them, and how you present them all impact the audience's ability to take in, organize, and remember the content. For example, a frequent mistake in slide design is to include full sentences, but reading and verbal processing use the same cognitive channels—therefore, an audience member can either read the slide, listen to you, or do some part of both (each poorly), as a result of cognitive overload [ 4 ]. The visual channel is separate, allowing images/videos to be processed with auditory information without cognitive overload [ 6 ] (Rule 6). As presentations are an exercise in listening, and not reading, do what you can to optimize the ability of the audience to listen. Use words sparingly as “guide posts” to you and the audience about major points of the slide. In fact, you can add short text fragments, redundant with the verbal component of the presentation, which has been shown to improve retention [ 7 ] (see Fig 1 for an example of redundant text that avoids cognitive overload). When selecting a slide template, be careful not to accidentally add elements that the audience must process but that are unimportant. David JP Phillips argues (and effectively demonstrates in his TEDx talk [ 5 ]) that the human brain can easily interpret 6 elements; more than that requires a 500% increase in cognitive load—so keep the total number of elements on the slide to 6 or fewer. Finally, in addition to the use of short text, white space, and the effective use of graphics/images, you can improve ease of cognitive processing further by considering color choices and font type and size. Here are a few suggestions for improving the experience for your audience, highlighting the importance of these elements for some specific groups:

  • Use high contrast colors and simple backgrounds with low to no color—for persons with dyslexia or visual impairment.
  • Use sans serif fonts and large font sizes (including figure legends), avoid italics, underlining (use bold font instead for emphasis), and all capital letters—for persons with dyslexia or visual impairment [ 8 ].
  • Use color combinations and palettes that can be understood by those with different forms of color blindness [ 9 ]. There are excellent tools available to identify colors to use and ways to simulate your presentation or figures as they might be seen by a person with color blindness (easily found by a web search).
  • As virtual presentation tools become increasingly common, consider practicing your talk with a closed captioning system to capture your words. Use this to identify where to improve your speaking pace, volume, and enunciation so that all members of your audience, especially those with a hearing impairment, can understand you.
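For the color-palette point above, the idea behind the simulators mentioned can be sketched programmatically. The snippet below is a minimal illustration, not one of the dedicated tools the text recommends: it assumes NumPy, linear RGB inputs in [0, 1], and the commonly cited Viénot et al. (1999) matrices; the function name `simulate_protanopia` is ours.

```python
# Sketch: approximate how a slide color appears to a viewer with protanopia
# (red-blind color vision). Assumes linear RGB in [0, 1]; matrices follow the
# widely cited Vienot et al. (1999) approach. Illustrative only.
import numpy as np

# Linear RGB -> LMS cone response.
RGB_TO_LMS = np.array([
    [17.8824, 43.5161, 4.11935],
    [3.45565, 27.1554, 3.86714],
    [0.0299566, 0.184309, 1.46709],
])

# Protanopia: the L (long-wavelength) cone response is replaced by a
# combination of the remaining M and S responses.
PROTANOPIA = np.array([
    [0.0, 2.02344, -2.52581],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
])

def simulate_protanopia(rgb):
    """Project a linear-RGB color through the protanopia transform."""
    lms = RGB_TO_LMS @ np.asarray(rgb, dtype=float)
    return np.linalg.inv(RGB_TO_LMS) @ (PROTANOPIA @ lms)

red, green = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
print(np.round(simulate_protanopia(red), 3),
      np.round(simulate_protanopia(green), 3))
```

Running this on pure red and pure green shows both collapsing toward a similar yellowish hue, which is why a red/green distinction alone should never carry meaning on a slide.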

Rule 8: Design the slide so that a distracted person gets the main takeaway

It is very difficult to stay focused on a presentation, especially if it is long or if it is part of a longer series of talks at a conference. Audience members may get distracted by an important email, or they may start dreaming of lunch. So, it’s important to look at your slide and ask “If they heard nothing I said, will they understand the key concept of this slide?” The other rules are set up to help with this, including clarity of the single point of the slide (Rule 1), titling it with a major conclusion (Rule 3), and the use of figures (Rule 6) and short text redundant to your verbal description (Rule 7). However, with each slide, step back and ask whether its main conclusion is conveyed, even if someone didn’t hear your accompanying dialog. Importantly, ask if the information on the slide is at the right level of abstraction. For example, do you have too many details about the experiment, which hides the conclusion of the experiment (i.e., breaking Rule 1)? If you are worried about not having enough details, keep a slide at the end of your slide deck (after your conclusions and acknowledgments) with the more detailed information that you can refer to during a question and answer period.

Rule 9: Iteratively improve slide design through practice

Well-designed slides that follow the first 8 rules are intended to help you deliver the message you intend and in the amount of time you intend to deliver it in. The best way to ensure that you nailed slide design for your presentation is to practice, typically a lot. The most important aspects of practicing a new presentation, with an eye toward slide design, are the following 2 key points: (1) practice to ensure that you hit, each time through, the most important points (for example, the text guide posts you left yourself and the title of the slide); and (2) practice to ensure that as you conclude the end of one slide, it leads directly to the next slide. Slide transitions, what you say as you end one slide and begin the next, are important to keeping the flow of the “story.” Practice is when I discover that the order of my presentation is poor or that I left myself too few guideposts to remember what was coming next. Additionally, during practice, the most frequent things I have to improve relate to Rule 2 (the slide takes too long to present, usually because I broke Rule 1, and I’m delivering too much information for one slide), Rule 4 (I have a nonessential detail on the slide), and Rule 5 (I forgot to give a key reference). The very best type of practice is in front of an audience (for example, your lab or peers), where, with fresh perspectives, they can help you identify places for improving slide content, design, and connections across the entirety of your talk.

Rule 10: Design to mitigate the impact of technical disasters

The real presentation almost never goes as we planned in our heads or during our practice. Maybe the speaker before you went over time and now you need to adjust. Maybe the computer the organizer is having you use won’t show your video. Maybe your internet is poor on the day you are giving a virtual presentation at a conference. Technical problems are routinely part of the practice of sharing your work through presentations. Hence, you can design your slides to limit the impact certain kinds of technical disasters create and also prepare alternate approaches. Here are just a few examples of the preparation you can do that will take you a long way toward avoiding a complete fiasco:

  • Save your presentation as a PDF—if the version of Keynote or PowerPoint on the host computer causes issues, you still have a functional copy with a higher guarantee of compatibility.
  • When using videos, create a backup slide with screenshots of key results. For example, if I have a video of cell migration, I’ll be sure to have a copy of the start and end of the video in case the video doesn’t play. Even if the video works, you can pause on this backup slide and take the time to highlight the key results in words if someone could not see or understand the video.
  • Avoid animations, such as figures or text that flash/fly-in/etc. Surveys suggest that no one likes movement in presentations [ 3 , 4 ]. There is likely a cognitive underpinning to the almost universal distaste for pointless animations, which relates to the idea proposed by Kosslyn and colleagues that animations are salient perceptual units that capture direct attention [ 4 ]. Although perceptual salience can be used to draw attention to and improve retention of specific points, if you use this approach for unnecessary/unimportant things (like animation of your bullet point text, fly-ins of figures, etc.), then you will distract your audience from the important content. Finally, animations cause additional processing burdens for people with visual impairments [ 10 ] and create opportunities for technical disasters if the software on the host system is not compatible with your planned animation.

Conclusions

These rules are just a start in creating more engaging presentations that increase audience retention of your material. However, there are wonderful resources on continuing on the journey of becoming an amazing public speaker, which includes understanding the psychology and neuroscience behind human perception and learning. For example, as highlighted in Rule 7, David JP Phillips has a wonderful TEDx talk on the subject [ 5 ], and “PowerPoint presentation flaws and failures: A psychological analysis,” by Kosslyn and colleagues is deeply detailed about a number of aspects of human cognition and presentation style [ 4 ]. There are many books on the topic, including the popular “Presentation Zen” by Garr Reynolds [ 11 ]. Finally, although briefly touched on here, the visualization of data is an entire topic of its own that is worth perfecting for both written and oral presentations of work, with fantastic resources like Edward Tufte’s “The Visual Display of Quantitative Information” [ 12 ] or the article “Visualization of Biomedical Data” by O’Donoghue and colleagues [ 13 ].

Acknowledgments

I would like to thank the countless presenters, colleagues, students, and mentors from whom I have learned a great deal about effective presentations. Also, a thank you to the wonderful resources published by organizations on how to increase inclusivity. A special thanks to Dr. Jason Papin and Dr. Michael Guertin for early feedback on this editorial.

References

  • 3. Vanderbilt University Center for Teaching. Making better PowerPoint presentations. n.d. Available from: https://cft.vanderbilt.edu/guides-sub-pages/making-better-powerpoint-presentations/#baddeley .
  • 8. British Dyslexia Association. Creating a dyslexia friendly workplace: Dyslexia friendly style guide. n.d. Available from: https://www.bdadyslexia.org.uk/advice/employers/creating-a-dyslexia-friendly-workplace/dyslexia-friendly-style-guide .
  • 9. Cravit R. How to use color blind friendly palettes to make your charts accessible. 2019. Available from: https://venngage.com/blog/color-blind-friendly-palette/ .
  • 10. VocalEyes. Making your conference presentation more accessible to blind and partially sighted people. n.d. Available from: https://vocaleyes.co.uk/services/resources/guidelines-for-making-your-conference-presentation-more-accessible-to-blind-and-partially-sighted-people/ .
  • 11. Reynolds G. Presentation Zen: Simple ideas on presentation design and delivery. 2nd ed. New Riders Pub; 2011.
  • 12. Tufte ER. The visual display of quantitative information. 2nd ed. Graphics Press; 2001.

May 8-13, 2021 Online Virtual Conference (originally Yokohama, Japan)

For Authors

Guide to a Successful Presentation

Standard Technical Support

These specifications reflect what has typically been provided at previous CHI conferences. We will update this page to reflect the specifications for CHI 2021 at least two months before the conference.

  • Projector with a resolution of 1024 x 768
  • VGA connection
  • 1/8″ audio input to room speakers
  • Podium microphone

Example Presentations

Please see these examples:

  • An example of a GOOD presentation slide deck
  • An example of a BAD presentation slide deck

Organizing your Content

DON’T give a presentation that will be comprehensible and interesting only to people who work in the same area as you. Please be aware that CHI is a multidisciplinary conference, with researchers and practitioners in attendance. DO ensure that even people who have little familiarity with your sub-area of HCI can understand at least the main points of your work.

In fact, even the experts in your area don’t need to understand more than the main points; for the rest, they can read the paper.

DON’T subject your audience to an “ordeal by a bulleted list.” Bulleted lists – especially those with large amounts of text – should be used only in exceptional cases. They are generally boring, abstract, unconvincing, and hard to read while the speaker is talking. DO present a series of “exhibits”: images, videos, system demos, diagrams, graphs, or tables. You can explain and elaborate on these exhibits while people are looking at them. In general, you don’t need to write what you say on the slides. Anyone who wants to see the points you made in black and white can read your paper. Carefully preparing an exhibit can take at least 10 times as long as dashing off a bulleted list, but your audience deserves nothing less.
DON’T use full sentences on your slides, or write out your entire talk on your slides. DO use text sparingly: Keep your points in short, concise, outline form. This will inform the viewer about the topic, and will also help you remember your key points for discussion.

Polishing the Details

DON’T put material on a slide that only the people in the front rows can read. Font sizes smaller than 28pt will likely be unreadable. DO pay special attention to types of material that often turn out to be illegible: screenshots and complex graphics. If an exhibit like this can’t be shown legibly as a whole, find a way to zoom in on individual parts of it as they are discussed.
DON’T clutter each slide with distracting logos and superfluous information such as the title of the talk or the name and date of the conference. DO present only material that helps you to convey your points effectively. If you must include your institution’s logo on each slide, make sure that it is not the most conspicuous and interesting element on any slide.

Giving the Presentation

DON’T risk fumbling desperately with the laptop at the beginning of your talk. DO arrive 20 minutes before your session to test the compatibility of your laptop with the projector. If you bring your presentation on a USB drive to present on someone else’s laptop, do everything possible to maximize its portability, and test the presentation at the earliest opportunity, leaving plenty of time to fix any problems (e.g., replacing missing fonts).
DON’T talk in such a way that only a fraction of the listeners can understand you. DO keep in mind the people who are not especially experienced in listening to English-language presentations. Native speakers of English need to avoid speaking too fast or colloquially; non-native speakers should enunciate clearly so that any foreign accent does not impair comprehension.

 

DO use your microphone, even if there are not many attendees in your session. Session rooms are still enormous, and you will be on stage. If the room is crowded, your talk may appear on a screen outside the room, and the way viewers will hear you is through your microphone.  Remember that the use of a microphone does not in itself guarantee that people in the back can hear you easily: speak up in a lively manner!

DON’T ignore your session chair’s time warnings. DO finish within your allotted time:

 

DO pay attention to the Session Chair’s countdown cards. You will receive warnings at five minutes prior, one minute prior, and when the time is up. If you do not stop when the time is called, your Session Chair will come to the stage to start the Q&A session.

 

DO rehearse your presentation before attending CHI, and cut content if you’re cutting it close.

DON’T rush to cover your remaining content if you are running out of time. DO stop at the end of the allotted time, even if you have content left.  No matter how hard you worked on your last few slides, the audience would rather have time for discussion, and the conference needs to keep on schedule.  Often presenters rush through their last few slides just for the sake of finishing, and it is almost never the case that useful information is conveyed during those slides.  If you’re behind, just say “I’ll stop here and take questions”. Any speaker who exceeds the allotted time will be interrupted mercilessly by the Session Chair, and time for questions will be reduced accordingly.

Answering Questions

DON’T end your presentation with a slide that contains only uninformative text like “Any questions?” DO conclude with a slide that helps the audience remember your talk the way you want them to remember it.  This is typically achieved by summarizing your main contributions; this is a rare case where a bulleted list may be appropriate. This slide will help people to think of important questions to ask, and will help them remember the key points of your talk, so they can go tell their colleagues how great it was.
DON’T use a question from the audience as a springboard to leap into the five minutes of your talk that you had to leave out because of the time limit. DO answer each question directly and concisely, without digressing into related topics. Give others a chance to ask their questions as well.


How to Prepare a Paper Presentation?

  • First Online: 02 February 2019

  • Timothy Lording & Jacques Menetrey


Presenting your paper at a meeting is an important part of sharing your research with the orthopaedic community. Presentations are generally short and sharp, and careful preparation is key to ensure that the premise, findings, and relevance of your work are successfully conveyed. For most conference papers, the structure will mirror that of a scientific manuscript, with an introduction, materials and methods, results, discussion, and conclusions. Anticipation of potential questions will help to clarify your research for the audience.




Author information

Timothy Lording: Melbourne Orthopaedic Group, Windsor, VIC, Australia; The Alfred Hospital, Melbourne, VIC, Australia.

Jacques Menetrey: Centre de Médecine du Sport et de l’Exercice, Hirslanden Clinique la Colline, Geneva, Switzerland; Service de Chirurgie Orthopédique et Traumatologie de l’Appareil Moteur, University Hospital of Geneva, Geneva, Switzerland.

Corresponding author: Timothy Lording.

Editor information

Volker Musahl, UPMC Rooney Sports Complex, University of Pittsburgh, Pittsburgh, PA, USA
Jón Karlsson, Department of Orthopaedics, Sahlgrenska Academy, Gothenburg University, Sahlgrenska University Hospital, Gothenburg, Sweden
Michael T. Hirschmann, Department of Orthopaedic Surgery and Traumatology, Kantonsspital Baselland (Bruderholz, Laufen und Liestal), Bruderholz, Switzerland
Olufemi R. Ayeni, McMaster University, Hamilton, ON, Canada
Robert G. Marx, Hospital for Special Surgery, New York, NY, USA
Jason L. Koh, Department of Orthopaedic Surgery, NorthShore University HealthSystem, Evanston, IL, USA
Norimasa Nakamura, Institute for Medical Science in Sports, Osaka Health Science University, Osaka, Japan


Copyright information

© 2019 ISAKOS

About this chapter

Lording, T., Menetrey, J. (2019). How to Prepare a Paper Presentation? In: Musahl, V., et al. Basic Methods Handbook for Clinical Orthopaedic Research. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-58254-1_24

Published: 02 February 2019

Print ISBN: 978-3-662-58253-4 | Online ISBN: 978-3-662-58254-1


Presentation Schedule

MAIN CONFERENCE

Each paper will have a pre-recorded video and a PDF of the poster, whether accepted as an oral or poster presentation. The maximum duration of the video differs: videos of accepted posters are up to 5 minutes, while videos of accepted orals can be up to 12 minutes. An asynchronous text chat will be available for each paper. Attendees can view the papers and videos on demand at any time. Authors will also have two scheduled Q&A sessions at the posted times below. Papers will be placed in pods for the Q&A sessions.

A list of the papers and their respective session can be found  here . Please note each paper must be represented at two sessions – A & B.

All posted times are EDT. When the virtual site is live, you will be able to select which sessions you are interested in and it will populate your own schedule.

Paper Session 1A and 1B:   Tuesday, October 12, 10:00 AM – 11:00 AM and  Thursday, October 14, 5:00 PM – 6:00 PM

Paper Session 2A and 2B:   Tuesday, October 12, 11:00 AM – 12:00 PM and  Thursday, October 14, 6:00 PM – 7:00 PM

Paper Session 3A and 3B:  Tuesday, October 12, 12:00 PM – 1:00 PM and Thursday, October 14, 7:00 PM – 8:00 PM

Paper Session 4A and 4B:  Tuesday, October 12, 3:00 PM – 4:00 PM and Thursday, October 14, 8:00 AM – 9:00 AM

Paper Session 5A and 5B:  Tuesday, October 12, 4:00 PM – 5:00 PM and  Thursday, October 14, 9:00 AM – 10:00 AM

Paper Session 6A and 6B:   Tuesday, October 12, 5:00 PM – 6:00 PM and  Thursday, October 14, 10:00 AM – 11:00 AM

Paper Session 7A and 7B:  Wednesday, October 13, 8:00 AM – 9:00 AM and  Friday, October 15,  3:00 PM – 4:00 PM

Paper Session 8A and 8B:  Wednesday, October 13, 9:00 AM – 10:00 AM and  Friday, October 15, 4:00 PM – 5:00 PM

Paper Session 9A and 9B :  Wednesday, October 13, 10:00 AM – 11:00 AM and  Friday, October 15, 5:00 PM – 6:00 PM

Paper Session 10A and 10B:  Wednesday, October 13, 5:00 PM – 6:00 PM and  Friday, October 15, 10:00 AM – 11:00 AM

Paper Session 11A and 11B:  Wednesday, October 13, 6:00 PM – 7:00 PM and Friday, October 15, 11:00 AM – 12:00 PM

Paper Session 12A and 12B:  Wednesday, October 13, 7:00 PM – 8:00 PM and  Friday, October 15, 12:00 PM – 1:00 PM


Paper Presentations

Tuesday, 5 October

Wednesday, 6 October
Thursday, 7 October

Paper Session 1: Displays

Tuesday, 5 October 9:30 CEST UTC+2 Track A

Session Chair: Yifan (Evan) Peng
YouTube Stream (non-interactive)
Discord Channel for Zoom link and Interactive Q&A Access (registered attendees only): Browser, App
Post-Session Discussion with Authors in Gathertown Room: Q&A Track A

Edge-Guided Near-Eye Image Analysis for Head Mounted Displays

Zhimin Wang, Beihang University
Yuxin Zhao, Beihang University
Yunfei Liu, Beihang University
Feng Lu, Beihang University

Conference Paper

Eye tracking provides an effective way for interaction in Augmented Reality (AR) Head Mounted Displays (HMDs). Current eye tracking techniques for AR HMDs require eye segmentation and ellipse fitting under near-infrared illumination. However, due to the low contrast between sclera and iris regions and unpredictable reflections, it is still challenging to accomplish accurate iris/pupil segmentation and the corresponding ellipse fitting tasks. In this paper, inspired by the fact that most essential information is encoded in the edge areas, we propose a novel near-eye image analysis method with edge maps as guidance. Specifically, we first utilize an Edge Extraction Network (E2-Net) to predict high-quality edge maps, which only contain eyelids and iris/pupil contours without other undesired edges. Then we feed the edge maps into an Edge-Guided Segmentation and Fitting Network (ESF-Net) for accurate segmentation and ellipse fitting. Extensive experimental results demonstrate that our method outperforms current state-of-the-art methods in near-eye image segmentation and ellipse fitting tasks, based on which we present applications of eye tracking with AR HMD.

Head-Mounted Display with Increased Downward Field of View Improves Presence and Sense of Self-Location

Kizashi Nakano, Nara Institute of Science and Technology
Naoya Isoyama, Nara Institute of Science and Technology
Diego Vilela Monteiro, Xi’an Jiaotong-Liverpool University
Nobuchika Sakata, Ryukoku University
Kiyoshi Kiyokawa, Nara Institute of Science and Technology
Takuji Narumi, The University of Tokyo

Journal Paper

Common existing head-mounted displays (HMDs) for virtual reality (VR) provide users with a high presence and embodiment. However, the field of view (FoV) of a typical HMD for VR is about 90 to 110 [deg] in the diagonal direction and about 70 to 90 [deg] in the vertical direction, which is narrower than that of humans. Specifically, the downward FoV of conventional HMDs is too narrow to present the user avatar’s body and feet. To address this problem, we have developed a novel HMD with a pair of additional display units to increase the downward FoV by approximately 60 (10 + 50) [deg]. We comprehensively investigated the effects of the increased downward FoV on the sense of immersion that includes presence, sense of self-location (SoSL), sense of agency (SoA), and sense of body ownership (SoBO) during VR experience and on patterns of head movements and cybersickness as its secondary effects. As a result, it was clarified that the HMD with an increased FoV improved presence and SoSL. Also, it was confirmed that the user could see the object below with a head movement pattern close to the real behavior, and did not suffer from cybersickness. Moreover, the effect of the increased downward FoV on SoBO and SoA was limited since it was easier to perceive the misalignment between the real and virtual bodies.

Blending Shadows: Casting Shadows in Virtual and Real using Occlusion-Capable Augmented Reality Near-Eye Displays

Kiyosato Someya, Tokyo Institute of Technology
Yuta Itoh, The University of Tokyo

The fundamental goal of augmented reality (AR) is to integrate virtual objects into the user’s perceived reality seamlessly. However, various issues hinder this integration. In particular, Optical See-Through (OST) AR is hampered by the need for light subtraction due to its see-through nature, making some basic rendering harder to realize. In this paper, we realize mutual shadows between real and virtual objects in OST AR to improve this virtual–real integration. Shadows are a classic problem in computer graphics, virtual reality, and video see-through AR, yet they have not been fully explored in OST AR due to the light subtraction requirement. We build a proof-of-concept system that combines a custom occlusion-capable OST display, global light source estimation, 3D registration, and ray-tracing-based rendering. We will demonstrate mutual shadows using a prototype and demonstrate its effectiveness by quantitatively evaluating shadows with the real environment using a perceptual visual metric.

Directionally Decomposing Structured Light for Projector Calibration

Masatoki Sugimoto, Osaka University
Daisuke Iwai, Osaka University
Koki Ishida, Osaka University
Parinya Punpongsanon, Osaka University
Kosuke Sato, Osaka University

Intrinsic projector calibration is essential in projection mapping (PM) applications, especially in dynamic PM. However, due to the shallow depth-of-field (DOF) of a projector, more work is needed to ensure accurate calibration. We aim to estimate the intrinsic parameters of a projector while avoiding the limitation of shallow DOF. As the core of our technique, we present a practical calibration device that requires a minimal working volume directly in front of the projector lens regardless of the projector’s focusing distance and aperture size. The device consists of a flat-bed scanner and pinhole-array masks. For calibration, a projector projects a series of structured light patterns in the device. The pinholes directionally decompose the structured light, and only the projected rays that pass through the pinholes hit the scanner plane. For each pinhole, we extract a ray passing through the optical center of the projector. Consequently, we regard the projector as a pinhole projector that projects the extracted rays only, and we calibrate the projector by applying the standard camera calibration technique, which assumes a pinhole camera model. Using a proof-of-concept prototype, we demonstrate that our technique can calibrate projectors with different focusing distances and aperture sizes at the same accuracy as a conventional method. Finally, we confirm that our technique can provide intrinsic parameters accurate enough for a dynamic PM application, even when a projector is placed too far from a projection target for a conventional method to calibrate the projector using a fiducial object of reasonable size.

Multifocal Stereoscopic Projection Mapping

Sorashi Kimura, Osaka University
Daisuke Iwai, Osaka University
Parinya Punpongsanon, Osaka University
Kosuke Sato, Osaka University

Stereoscopic projection mapping (PM) allows a user to see a three-dimensional (3D) computer-generated (CG) object floating over physical surfaces of arbitrary shapes around us using projected imagery. However, the current stereoscopic PM technology only satisfies binocular cues and is not capable of providing correct focus cues, which causes a vergence–accommodation conflict (VAC). Therefore, we propose a multifocal approach to mitigate VAC in stereoscopic PM. Our primary technical contribution is to attach electrically focus-tunable lenses (ETLs) to active shutter glasses to control both vergence and accommodation. Specifically, we apply fast and periodical focal sweeps to the ETLs, which causes the “virtual image” (as an optical term) of a scene observed through the ETLs to move back and forth during each sweep period. A 3D CG object is projected from a synchronized high-speed projector only when the virtual image of the projected imagery is located at a desired distance. This provides an observer with the correct focus cues required. In this study, we solve three technical issues that are unique to stereoscopic PM: (1) The 3D CG object is displayed on non-planar and even moving surfaces; (2) the physical surfaces need to be shown without the focus modulation; (3) the shutter glasses additionally need to be synchronized with the ETLs and the projector. We also develop a novel compensation technique to deal with the “lens breathing” artifact that varies the retinal size of the virtual image through focal length modulation. Further, using a proof-of-concept prototype, we demonstrate that our technique can present the virtual image of a target 3D CG object at the correct depth. Finally, we validate the advantage provided by our technique by comparing it with conventional stereoscopic PM using a user study on a depth-matching task.

Paper Session 2: Gestures & Hand

Tuesday, 5 October 9:30 CEST UTC+2 Track B

Session Chair: Guofeng Zang

Detection-Guided 3D Hand Tracking for Mobile AR Application

Yunlong Che, Oppo Yue Qi, BUAA

Interaction using bare hands is attracting growing interest in mobile-based Augmented Reality (AR). Existing RGB-based works fail to provide a practical solution for identifying rich details of the hand. In this paper, we present a detection-guided method capable of recovering 3D hand posture with a color camera. The proposed method consists of a key-point detector and a 3D pose optimizer. The detector first locates the 2D hand bounding box and then applies a lightweight network to the hand region to produce a pixel-wise likelihood of hand joints. The optimizer lifts the 3D pose from the estimated 2D joints in a model-fitting manner. To ensure plausible results, we encode the hand shape into the objective function. The estimated 3D posture allows flexible hand-to-mobile interaction in AR applications. We extensively evaluate the proposed approach on several challenging public datasets. The experimental results demonstrate the efficiency and effectiveness of the proposed method.

SAR: Spatial-Aware Regression for 3D Hand Pose and Mesh Reconstruction from a Monocular RGB Image

Xiaozheng Zheng, Beijing University of Posts and Telecommunications Pengfei Ren, Beijing University of Posts and Telecommunications Haifeng Sun, Beijing University of Posts and Telecommunications Jingyu Wang, Beijing University of Posts and Telecommunications Qi Qi, Beijing University of Posts and Telecommunications Jianxin Liao, Beijing University of Posts and Telecommunications

3D hand reconstruction has been a popular research topic in recent years, with great potential for VR/AR applications. However, given the limited computational resources of VR/AR equipment, a reconstruction algorithm must balance accuracy and efficiency for users to have a good experience, and current methods do not balance the two well. This paper therefore proposes a novel framework that achieves fast and accurate 3D hand reconstruction. Our framework relies on three essential modules: spatial-aware initial graph building (SAIGB), graph convolutional network (GCN) based belief maps regression (GBBMR), and pose-guided refinement (PGR). First, given image feature maps extracted by convolutional neural networks, SAIGB builds a spatial-aware and compact initial feature graph. Each node in this graph represents a vertex of the mesh and carries vertex-specific spatial information that aids accurate and efficient regression. Next, GBBMR uses an adaptive GCN to introduce interactions between vertices, capturing short-range and long-range dependencies efficiently and flexibly; it then maps vertex features to belief maps that model the uncertainty of predictions, enabling more accurate predictions. Finally, PGR compresses the redundant vertex belief maps into compact joint belief maps under pose guidance and uses these joint belief maps to refine the previous predictions, yielding more accurate and robust reconstruction results. Our method achieves state-of-the-art performance on four public benchmarks: FreiHAND, HO-3D, RHD, and STB. Moreover, it runs two to three times faster than previous state-of-the-art methods. Our code is available at https://github.com/zxz267/SAR.

Two-hand Pose Estimation from the non-cropped RGB Image with Self-Attention Based Network

Zhoutao Sun, Beihang University Hu Yong, Beihang University Xukun Shen, Beihang University

Estimating the pose of two hands is a crucial problem for many human-computer interaction applications. Since most existing works use cropped images to predict hand pose, they require a hand detection stage before pose estimation or take cropped images directly as input. In this paper, we propose the first real-time one-stage method for pose estimation from a single RGB image without hand tracking. Combining the self-attention mechanism with convolutional layers, the proposed network predicts 2.5D hand joint coordinates while locating the two hand regions. To reduce the extra memory and computational consumption caused by self-attention, we propose a linear attention structure with a spatial-reduction attention block, called the SRAN block. We demonstrate the effectiveness of each component of our network through an ablation study, and experiments on public datasets show results competitive with the state-of-the-art method.

STGAE: Spatial-Temporal Graph Auto-Encoder for Hand Motion Denoising

Kanglei Zhou, Beihang University Zhiyuan Cheng, Beihang University Hubert Shum, Durham University Frederick W. B. Li, Durham University Xiaohui Liang, Beihang University

Hand-object interaction in mixed reality (MR) relies on accurate tracking and estimation of human hands, which provide users with a sense of immersion. However, raw captured hand motion data always contains errors such as joint occlusion, dislocation, high-frequency noise, and involuntary jitter. Denoising the hand motion data so that it is consistent with the user’s intention is of the utmost importance for enhancing the interactive experience in MR. To this end, we propose an end-to-end method for hand motion denoising using a spatial-temporal graph auto-encoder (STGAE). Spatial and temporal patterns are recognized simultaneously by constructing the consecutive hand joint sequence as a spatial-temporal graph. Considering the complexity of the articulated hand structure, a simple yet effective partition strategy is proposed to model physically-connected and symmetry-connected relationships. Graph convolution is applied to extract structural constraints of the hand, and a self-attention mechanism is used to adjust the graph topology dynamically. Combining graph convolution and temporal convolution, we propose a fundamental graph encoder/decoder block, and stack these blocks into an hourglass residual auto-encoder that learns a manifold projection operation and a corresponding inverse projection. The proposed framework successfully denoises hand motion data while preserving structural constraints between joints. Extensive quantitative and qualitative experiments show that the proposed method achieves better performance than state-of-the-art approaches.

Classifying In-Place Gestures with End-to-End Point Cloud Learning

Lizhi Zhao, Northwest A&F University Xuequan Lu, Deakin University Min Zhao, Northwest A&F University Meili Wang, Northwest A&F University

Walking in place for moving through virtual environments has attracted noticeable attention recently. Recent attempts have focused on training a classifier to recognize certain gesture patterns (e.g., standing, walking) using neural networks such as CNNs or LSTMs. Nevertheless, they often consider very few types of gestures and/or introduce undesirable latency in virtual environments. In this paper, we propose a novel framework for accurate and efficient classification of in-place gestures. Our key idea is to treat several consecutive frames as a “point cloud”. The HMD and two VIVE trackers provide three points in each frame, with each point consisting of 12-dimensional features (i.e., three-dimensional position, velocity, rotation, and angular velocity). We create a dataset consisting of 9 gesture classes for virtual in-place locomotion. In addition to the supervised point-based network, we also account for inter-person variations through unsupervised domain adaptation. To this end, we develop an end-to-end joint framework combining a supervised loss for supervised point learning with an unsupervised loss for unsupervised domain adaptation. Experiments demonstrate that our approach generates very promising outcomes in terms of high overall classification accuracy (95.0%) and real-time performance (192 ms latency). We will release our dataset and source code to the community.
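The "frames as a point cloud" idea can be sketched as follows (names, shapes, and the restriction to position-plus-velocity features are illustrative, not the authors' code): stack consecutive frames of the three tracked devices into one set of feature points for a point-based classifier.

```python
def build_point_cloud(frames, dt):
    """frames: list over time of [(x, y, z) for HMD, tracker1, tracker2].
    Emits one point per device per frame, carrying position plus a
    finite-difference velocity (rotation and angular-velocity features
    from the full 12-D description are omitted here for brevity)."""
    cloud = []
    for t in range(1, len(frames)):
        for dev in range(len(frames[t])):
            px, py, pz = frames[t][dev]
            qx, qy, qz = frames[t - 1][dev]
            vel = ((px - qx) / dt, (py - qy) / dt, (pz - qz) / dt)
            cloud.append((px, py, pz) + vel)
    return cloud
```

The resulting unordered point set could then be fed to any point-based network (e.g., a PointNet-style architecture) for gesture classification.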

The Object at Hand: Automated Editing for Mixed Reality Video Guidance from Hand-Object Interactions

Yao Lu, University of Bristol Walterio Mayol-Cuevas, University of Bristol

In this paper, we address the problem of automatically extracting the steps that compose hand activities. This is a key competency for processing, monitoring and providing video guidance in Mixed Reality systems. We use egocentric vision to observe hand-object interactions in real-world tasks and automatically decompose a video into its constituent steps. Our approach combines hand-object interaction (HOI) detection, object similarity measurement and a finite state machine (FSM) representation to automatically edit videos into steps. We use a combination of Convolutional Neural Networks (CNNs) and the FSM to discover edit cuts and merge operations while observing real hand activities. We evaluate our algorithm quantitatively and qualitatively on two datasets: GTEA [19] and a new dataset we introduce for Chinese tea making. Results show our method is able to segment hand-object interaction videos into key step segments with high levels of precision.

Paper Session 3: Input & Interaction

Tuesday, 5 October 11:30 CEST UTC+2 Track A

Session Chair: Christian Holz

A Taxonomy of Interaction Techniques for Immersive Augmented Reality based on an Iterative Literature Review

Julia Hertel, Universität Hamburg Sukran Karaosmanoglu, Universität Hamburg Susanne Schmidt, Universität Hamburg Julia Bräker, Universität Hamburg Martin Semmann, Universität Hamburg Frank Steinicke, Universität Hamburg

Developers of interactive systems have a variety of interaction techniques to choose from, each with individual strengths and limitations in terms of the considered task, context, and users. While there are taxonomies for desktop, mobile, and virtual reality applications, augmented reality (AR) taxonomies have not been established yet. However, recent advances in immersive AR technology (i.e., head-worn or projection-based AR), such as the emergence of untethered headsets with integrated gesture and speech sensors, have enabled the inclusion of additional input modalities and, therefore, novel multimodal interaction methods have been introduced. To provide an overview of interaction techniques for current immersive AR systems, we conducted a literature review of publications between 2016 and 2021. Based on 44 relevant papers, we developed a comprehensive taxonomy focusing on two identified dimensions – task and modality. We further present an adaptation of an iterative taxonomy development method to the field of human-computer interaction. Finally, we discuss observed trends and implications for future work.

HPUI: Hand Proximate User Interfaces for One-Handed Interactions on Head Mounted Displays

Shariff AM Faleel, University of Manitoba Michael Gammon, University of Manitoba Kevin Fan, Huawei Canada Da-Yuan Huang, Huawei Canada Wei Li, Huawei Canada Pourang Irani, University of Manitoba

We explore the design of Hand Proximate User Interfaces (HPUIs) for head-mounted displays (HMDs) to facilitate near-body interactions with the display directly projected on, or around the user’s hand. We focus on single-handed input, while taking into consideration the hand anatomy which distorts naturally when the user interacts with the display. Through two user studies, we explore the potential for discrete as well as continuous input. For discrete input, HPUIs favor targets that are directly on the fingers (as opposed to off-finger) as they offer tactile feedback. We demonstrate that continuous interaction is also possible, and is as effective on the fingers as in the off-finger space between the index finger and thumb. We also find that with continuous input, content is more easily controlled when the interaction occurs in the vertical or horizontal axes, and less with diagonal movements. We conclude with applications and recommendations for the design of future HPUIs.

Rotational-constrained optical see-through headset calibration with bare-hand alignment

Xue Hu, Imperial College London Ferdinando Rodriguez Y Baena, Imperial College London Fabrizio Cutolo, University of Pisa

The inaccessibility of user-perceived reality remains an open issue in pursuing the accurate calibration of optical see-through (OST) head-mounted displays (HMDs). Manual user alignment is usually required to collect a set of virtual-to-real correspondences, so that a default or an offline display calibration can be updated to account for the user’s eye position(s). Current alignment-based calibration procedures usually require point-wise alignments between rendered image point(s) and associated physical landmark(s) of a target calibration tool. As each alignment can only provide one or a few correspondences, repeated alignments are required to ensure calibration quality.

This work presents an accurate and tool-less online OST calibration method to update an offline-calibrated eye-display model. The user’s bare hand is markerlessly tracked by a commercial RGBD camera anchored to the OST headset to generate a user-specific cursor for correspondence collection. The required alignment is object-wise, and can provide thousands of unordered corresponding points in tracked space. The collected correspondences are registered by a proposed rotation-constrained iterative closest point (rcICP) method to optimise the viewpoint-related calibration parameters. We implemented the method for the Microsoft HoloLens 1. The resiliency of the proposed procedure to noisy data was evaluated through simulated tests and real experiments performed with an eye-replacement camera. According to the simulation tests, rcICP registration is robust against possible user-induced rotational misalignment. With a single alignment, our method achieves 8.81 arcmin (1.37 mm) positional error and 1.76 degrees rotational error in camera-based tests at arm's-reach distance, and 10.79 arcmin (7.71 pixels) reprojection error in user tests.

Complex Interaction as Emergent Behaviour: Simulating Mid-Air Text Entry using Reinforcement Learning

Lorenz Hetzel, ETH Zürich John J Dudley, University of Cambridge Anna Maria Feit, Saarland University Per Ola Kristensson, University of Cambridge

Accurately modelling user behaviour has the potential to significantly improve the quality of human-computer interaction. Traditionally, these models are carefully hand-crafted to approximate specific aspects of well-documented user behaviour. This limits their availability in virtual and augmented reality where user behaviour is often not yet well understood. Recent efforts have demonstrated that reinforcement learning can approximate human behaviour during simple goal-oriented reaching tasks. We build on these efforts and demonstrate that reinforcement learning can also approximate user behaviour in a complex mid-air interaction task: typing on a virtual keyboard. We present the first reinforcement learning-based user model for mid-air and surface-aligned typing on a virtual keyboard. Our model is shown to replicate high-level human typing behaviour. We demonstrate that this approach may be used to augment or replace human testing during the validation and development of virtual keyboards.

A Predictive Performance Model for Immersive Interactions in Mixed Reality

Florent Cabric, IRIT Emmanuel Dubois, IRIT Marcos Serrano, IRIT

The design of immersive interaction for mixed reality based on head-mounted displays (HMDs), hereafter referred to as Mixed Reality (MR), is still a tedious task that can hinder the advent of such devices. Indeed, the effects of interface design on task performance are difficult to anticipate during the design phase: the spatial layout of virtual objects and the interaction techniques used to select those objects can affect task completion time. Besides, testing such interfaces with users in controlled experiments requires considerable time and effort. To overcome this problem, predictive models such as the Keystroke-Level Model (KLM) can be used to predict the time required to complete an interactive task at an early stage of the design process. However, these models have not yet been properly extended to the specific interaction techniques of MR environments. In this paper we propose an extension of KLM to interaction performed in MR. First, we propose new operators and experimentally determine the unit times for each of them with a HoloLens v1. Then, we perform experiments based on realistic interaction scenarios to consolidate our model. These experiments confirm the validity of our KLM extension for predicting interaction time in mixed reality environments.
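The KLM idea being extended here can be sketched as a lookup-and-sum over operator unit times. The operator names and unit times below are placeholders for illustration, not the values calibrated in the paper:

```python
# Hypothetical MR operator unit times (seconds), for illustration only.
UNIT_TIMES = {
    "M": 1.35,  # mental preparation
    "P": 1.10,  # point head/gaze at a virtual object
    "S": 0.40,  # air-tap selection
    "G": 0.80,  # gesture to move an object
}

def predict_task_time(operator_sequence):
    """KLM-style prediction: total task time is the sum of the unit
    times of the elementary operators composing the task."""
    return sum(UNIT_TIMES[op] for op in operator_sequence)
```

For example, a "prepare, point, select" task would be modeled as the sequence "MPS"; extending the model to MR amounts to defining new operators and measuring their unit times, as the paper does.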

Paper Session 4: Rendering & Display

Tuesday, 5 October 11:30 CEST UTC+2 Track B

Session Chair: Kaan Aksit

AgentDress: Realtime Clothing Synthesis for Virtual Agents using Plausible Deformations

Nannan Wu, Zhejiang University Qianwen Chao, Xidian University Yanzhen Chen, State Key Lab of CAD&CG Weiwei Xu, Zhejiang University Chen Liu, Zhejiang Linctex Digital Technology Co. Dinesh Manocha, University of Maryland Wenxin Sun, Zhejiang University Yi Han, Zhejiang University Xinran Yao, Zhejiang University Xiaogang Jin, Zhejiang University

We present a CPU-based real-time cloth animation method for dressing virtual humans of various shapes and poses. Our approach formulates the clothing deformation as a high-dimensional function of body shape parameters and pose parameters. In order to accelerate the computation, our formulation factorizes the clothing deformation into two independent components: the deformation introduced by body pose variation (Clothing Pose Model) and the deformation from body shape variation (Clothing Shape Model). Furthermore, we sample and cluster the poses spanning the entire pose space and use those clusters to efficiently calculate the anchoring points. We also introduce a sensitivity-based distance measurement to both find nearby anchoring points and evaluate their contributions to the final animation. Given a query shape and pose of the virtual agent, we synthesize the resulting clothing deformation by blending the Taylor expansion results of nearby anchoring points. Compared to previous methods, our approach is general and able to add the shape dimension to any clothing pose model. Furthermore, we can animate clothing represented with tens of thousands of vertices at 50+ FPS on a CPU. We also conduct a user evaluation and show that our method can improve a user’s perception of dressed virtual agents in an immersive virtual environment (IVE) compared to a realtime linear blend skinning method.

Perception-Driven Hybrid Foveated Depth of Field Rendering for Head-Mounted Displays

Jingyu Liu, Technical University of Denmark Claire Mantel, Technical University of Denmark Soren Forchhammer, Technical University of Denmark

In this paper, we present a novel perception-driven hybrid rendering method that leverages the limitations of the human visual system (HVS). The features accounted for in our model include: foveation from visual acuity eccentricity (VAE), depth of field (DOF) from vergence & accommodation, and longitudinal chromatic aberration (LCA) from color vision. To allocate computational workload efficiently, we first apply a gaze-contingent geometry simplification. Then we convert the coordinates from screen space to polar space with a scaling strategy coherent with VAE. Upon that, we apply stochastic sampling based on DOF. Finally, we post-process the bokeh for DOF, which simultaneously achieves LCA and anti-aliasing. A virtual reality (VR) experiment on 6 Unity scenes with an HTC VIVE Pro Eye head-mounted display (HMD) yields frame rates ranging from 25.2 to 48.7 fps. Objective evaluation with FovVideoVDP – a perceptually-based visible difference metric – suggests that the proposed method gives satisfactory just-objectionable-difference (JOD) scores across the 6 scenes, from 7.61 to 8.69 (on a 10-unit scale). Our method achieves better performance than existing methods at the same or better level of quality scores.
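The foveation component can be illustrated with a standard acuity-falloff model: shading effort decreases with eccentricity from the gaze point. The hyperbolic falloff and all constants below are illustrative placeholders, not the paper's calibrated model:

```python
def relative_acuity(ecc_deg, e2=2.3):
    """Rough visual-acuity falloff with eccentricity, using the common
    hyperbolic model acuity(e) = e2 / (e2 + e); e2 is a placeholder
    half-resolution constant."""
    return e2 / (e2 + ecc_deg)

def shading_rate(ecc_deg, full_rate=1.0, min_rate=0.125):
    """Gaze-contingent shading rate: full rate at the fovea, falling
    off with eccentricity but clamped so the periphery never drops
    below a minimum quality floor."""
    return max(min_rate, full_rate * relative_acuity(ecc_deg))
```

A renderer could use such a function to drive per-region sample counts or geometry level-of-detail as a function of angular distance from the tracked gaze.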

Long-Range Augmented Reality with Dynamic Occlusion Rendering

Mikhail Sizintsev, SRI International Niluthpol Chowdhury Mithun, SRI International Han-Pang Chiu, SRI International Supun Samarasekera, SRI International Rakesh Kumar, SRI International

Proper occlusion-based rendering is very important for achieving realism in indoor and outdoor Augmented Reality (AR) applications. This paper addresses the problem of fast and accurate dynamic occlusion reasoning for real objects in the scene in large-scale outdoor AR applications. Conceptually, proper occlusion reasoning requires an estimate of depth for every point in the augmented scene, which is technically hard to achieve in outdoor scenarios, especially in the presence of moving objects. We propose a method to detect real objects in the scene and automatically infer their depth without explicit detailed scene modeling and depth sensing (e.g., without sensors such as 3D LiDAR). Specifically, we employ instance segmentation of color image data to detect real dynamic objects in the scene, and use either a top-down terrain elevation model or a deep-learning-based monocular depth estimation model to infer their metric distance from the camera for proper occlusion reasoning in real time. The resulting solution is implemented in a low-latency real-time framework for video-see-through AR and is directly extendable to optical-see-through AR. We minimize latency in depth reasoning and occlusion rendering by performing semantic object tracking and prediction in video frames.
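The occlusion decision described here reduces, per pixel (or per detected object), to a depth comparison between real and virtual content. A minimal sketch, with the flat-list depth representation being an illustrative simplification:

```python
def visible_virtual_mask(virtual_depth, real_depth):
    """Per-pixel occlusion reasoning: a virtual fragment is drawn only
    where it is nearer to the camera than the estimated depth of real
    scene content. Depths are flat lists of metres; None in real_depth
    means 'no real object on this ray'."""
    mask = []
    for v, r in zip(virtual_depth, real_depth):
        mask.append(r is None or v < r)
    return mask
```

In the full system the real-object depths would come from instance segmentation combined with a terrain model or monocular depth estimate, rather than a dense depth sensor.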

Scan&Paint: Image-based Projection Painting

Vanessa Klein, Friedrich-Alexander-University Erlangen-Nuremberg Markus Leuschner, Friedrich-Alexander-University Erlangen-Nuremberg Tobias Langen, Friedrich-Alexander-University Erlangen-Nuremberg Philipp Kurth, Friedrich-Alexander-University Erlangen-Nuremberg Marc Stamminger, Friedrich-Alexander-University Erlangen-Nuremberg Frank Bauer, Friedrich-Alexander-University Erlangen-Nuremberg

We present a pop-up projection painting system that projects onto an unknown three-dimensional surface, while the user creates the projection content on the fly. The digital paint is projected immediately and follows the object if it is moved. If unexplored surface areas are thereby exposed, an automated trigger system issues new depth recordings that expand and refine the surface estimate. By intertwining scanning and projection painting we scan the exposed surface at the appropriate time and only if needed. Like image-based rendering, multiple automatically recorded depth maps are fused in screen space to synthesize novel views of the object, making projection poses independent from the scan positions. Since the user’s digital paint is also stored in images, we eliminate the need to reconstruct and parametrize a single full mesh, which makes geometry and color updates simple and fast.

Gaze-Contingent Retinal Speckle Suppression in Holographic Displays

Praneeth Chakravarthula, UNC Chapel Hill Zhan Zhang, University of Science and Technology of China Okan Tarhan Tursun, Università della Svizzera italiana (USI) Piotr Didyk, University of Lugano Qi Sun, New York University Henry Fuchs, University of North Carolina at Chapel Hill

Computer-generated holographic (CGH) displays show great potential and are emerging as the next-generation displays for augmented and virtual reality, and automotive heads-up displays. One of the critical problems harming the wide adoption of such displays is the presence of speckle noise inherent to holography, that compromises its quality by introducing perceptible artifacts. Although speckle noise suppression has been an active research area, the previous works have not considered the perceptual characteristics of the Human Visual System (HVS), which receives the final displayed imagery. However, it is well studied that the sensitivity of the HVS is not uniform across the visual field, which has led to gaze-contingent rendering schemes for maximizing the perceptual quality in various computer-generated imagery. Inspired by this, we present the first method that reduces the “perceived speckle noise” by integrating foveal and peripheral vision characteristics of the HVS, along with the retinal point spread function, into the phase hologram computation. Specifically, we introduce the anatomical and statistical retinal receptor distribution into our computational hologram optimization, which places a higher priority on reducing the perceived foveal speckle noise while being adaptable to any individual’s optical aberration on the retina. Our method demonstrates superior perceptual quality on our emulated holographic display. Our evaluations with objective measurements and subjective studies demonstrate a significant reduction of the human perceived noise.

Paper Session 5: Avatars

Tuesday, 5 October 16:00 CEST UTC+2 Track A

Session Chair: Bobby Bodenheimer

The Effects of Virtual Avatar Visibility on Pointing Interpretation by Observers in 3D Environments

Brett Benda, University of Florida Eric Ragan, University of Florida

Avatars are often used to provide representations of users in 3D environments, such as desktop games or VR applications. While full-body avatars are often sought to be used in applications, low visibility avatars (i.e., head and hands) are often used in a variety of contexts, either as intentional design choices, for simplicity in contexts where full-body avatars are not needed, or due to external limitations. Avatar style can also vary from more simplistic and abstract to highly realistic depending on application context and user choices. We present the results of two desktop experiments that examine avatar visibility, style, and observer view on accuracy in a pointing interpretation task. Significant effects of visibility were found, with effects varying between horizontal and vertical components of error, and error amounts not always worsening as a result of lowering visibility. Error due to avatar visibility was much smaller than error resulting from avatar style or observer view. Our findings suggest that humans are reasonably able to understand pointing gestures with a limited observable body.

Diegetic Representations for Seamless Cross-Reality Interruptions

Matt Gottsacker, University of Central Florida Nahal Norouzi, University of Central Florida Kangsoo Kim, University of Central Florida Gerd Bruder, University of Central Florida Greg Welch, University of Central Florida

The closed design of virtual reality (VR) head-mounted displays substantially limits users’ awareness of their real-world surroundings. This presents challenges when another person in the same physical space needs to interrupt the VR user for a brief conversation. Such interruptions, e.g., tapping a VR user on the shoulder, can cause a disruptive break in presence (BIP), which affects their place and plausibility illusions, and may cause a drop in performance of their virtual activity. Recent findings related to the concept of diegesis, which denotes the internal consistency of an experience/story, suggest potential benefits of integrating registered virtual representations for physical interactors, especially when these appear internally consistent in VR. In this paper, we present a human-subject study we conducted to compare and evaluate five different diegetic and non-diegetic methods to facilitate cross-reality interruptions in a virtual office environment, where a user’s task was briefly interrupted by a physical person. We created a Cross-Reality Interaction Questionnaire (CRIQ) to capture the quality of the interaction from the VR user’s perspective. Our results show that the diegetic representations afforded reasonably high senses of co-presence, the highest quality interactions, the highest place illusions, and caused the least disruption of the participants’ virtual experiences. We discuss our findings as well as implications for practical applications that aim to leverage virtual representations to ease cross-reality interruptions.

Avatars for Teleconsultation: Effects of Avatar Embodiment Techniques on User Perception in 3D Asymmetric Telepresence

Kevin Yu, Technische Universität München Gleb Gorbachev, Technische Universität München Ulrich Eck, Technische Universitaet Muenchen Frieder Pankratz, LMU Nassir Navab, Technische Universität München Daniel Roth, Computer Aided Medical Procedures and Augmented Reality

A 3D Telepresence system allows users to interact with each other in a virtual, mixed, or augmented reality (VR, MR, AR) environment, creating a shared space for collaboration and communication. There are two main methods for representing users within these 3D environments. Users can be represented either as point cloud reconstruction-based avatars that resemble a physical user or as virtual character-based avatars controlled by tracking the users’ body motion. This work compares both techniques to identify the differences between user representations and their fit in the reconstructed environments regarding the perceived presence, uncanny valley factors, and behavior impression. Our study uses an asymmetric VR/AR teleconsultation system that allows a remote user to join a local scene using VR. The local user observes the remote user with an AR head-mounted display, leading to facial occlusions in the 3D reconstruction. Participants perform a warm-up interaction task followed by a goal-directed collaborative puzzle task, pursuing a common goal. The local user was represented either as a point cloud reconstruction or as a virtual character-based avatar, in which case the point cloud reconstruction of the local user was masked. Our results show that the point cloud reconstruction-based avatar was superior to the virtual character avatar regarding perceived co-presence, social presence, behavioral impression, and humanness. Further, we found that the task type partly affected the perception. The point cloud reconstruction-based approach led to higher usability ratings, while objective performance measures showed no significant difference. We conclude that despite partly missing facial information, the point cloud-based reconstruction resulted in better conveyance of the user behavior and a more coherent fit into the simulation context.

AlterEcho: Loose Avatar-Streamer Coupling for Expressive VTubing

Man To Tang, Purdue University Victor Long Zhu, Purdue University Voicu Popescu, Purdue University

VTubers are live streamers who embody computer animation virtual avatars. VTubing is a rapidly rising form of online entertainment in East Asia, most notably in Japan and China, and it has been more recently introduced in the West. However, animating an expressive VTuber avatar remains a challenge due to budget and usability limitations of current solutions, i.e., high-fidelity motion capture is expensive, while keyboard-based VTubing interfaces impose a cognitive burden on the streamer. This paper proposes a novel approach for VTubing animation based on the key principle of loosening the coupling between the VTuber and their avatar, and it describes a first implementation of the approach in the AlterEcho VTubing animation system. AlterEcho generates expressive VTuber avatar animation automatically, without the streamer’s explicit intervention; it breaks the strict tethering of the avatar from the streamer, allowing the avatar’s nonverbal behavior to deviate from that of the streamer. Without the complete independence of a true alter ego, but also without the constraint of mirroring the streamer with the fidelity of an echo, AlterEcho produces avatar animations that have been rated significantly higher by VTubers and viewers (N = 315) compared to animations created using simple motion capture, or using VMagicMirror, a state-of-the-art keyboard-based VTubing system. Our work also opens the door to personalizing the avatar persona for individual viewers.

Varying User Agency Alongside Interaction Opportunities in a Home Mobile Mixed Reality Story

Gideon Raeburn, Queen Mary University of London Laurissa Tokarchuk, Queen Mary University of London

New opportunities for immersive storytelling experiences have arrived through the technology in mobile phones, including the ability to overlay or register digital content on a user’s real-world surroundings, to more deeply immerse the user in the world of the story. This raises questions around the methods and freedom to interact with the digital elements that will lead to a more immersive and engaging experience. To investigate these areas, the Augmented Virtuality (AV) mobile phone application Home Story was developed for iOS devices. It allows a user to move and interact with objects in a virtual environment displayed on their phone, by physically moving in the real world, completing particular actions to progress a story. A mixed methods study with Home Story either guided participants to the next interaction or offered them increased agency to choose which object to interact with next. Virtual objects could also be interacted with in one of three ways: imagining the interaction, an embodied interaction using the user’s free hand, or a virtual interaction performed on the phone’s touchscreen. Similar levels of immersion were recorded across both study conditions, suggesting both can be effective, though highlighting different issues in each case. The embodied free-hand interactions proved particularly memorable, though further work is required to improve their implementation, given their novelty and users’ lack of familiarity.

Paper Session 6: Navigation & Training

Tuesday, 5 October 16:00 CEST UTC+2 Track B

Session Chair: Qi Sun YouTube Stream (non-interactive) Discord Channel for Zoom link and Interactive Q&A Access (registered attendees only) :  Browser , App Post-Session Discussion with Authors in Gathertown Room: Q&A Track B

The Cognitive Loads and Usability of Target-based and Steering-based Travel Techniques

Chengyuan Lai, The University of Texas at Dallas Xinyu Hu, University of Central Florida Afham Aiyaz, University of Texas at Dallas Ann K Segismundo, New York University Ananya A Phadke, The Hockaday School Ryan P. McMahan, University of Central Florida

Target-based and steering-based techniques are two common approaches to travel in consumer VR applications. In this paper, we present two within-subject studies that employ a prior dual-task methodology to evaluate and compare the cognitive loads, travel performances, and simulator sickness of three common target-based travel techniques and three common steering-based travel techniques. We also present visual meta-analyses comparing our results to prior results using the same dual-task methodology. Based on our results and meta-analyses, we present several design suggestions for travel techniques based on various aspects of user experiences.

Understanding, Modeling and Simulating Unintended Positional Drift during Repetitive Steering Navigation Tasks in Virtual Reality

Hugo Brument, INRIA Gerd Bruder, University of Central Florida Maud Marchal, INRIA Anne-Hélène Olivier, University of Rennes Ferran Argelaguet Sanz, INRIA

Virtual steering techniques enable users to navigate in larger Virtual Environments (VEs) than the physical workspace available. Even though these techniques do not require physical movement of the users (e.g. using a joystick and the head orientation to steer towards a virtual direction), recent work observed that users might unintentionally move in the physical workspace while navigating, resulting in Unintended Positional Drift (UPD). This phenomenon can be a safety issue since users may unintentionally reach the physical boundaries of the workspace while using a steering technique. In this context, as a necessary first step to improve the design of navigation techniques minimizing the UPD, this paper aims at analyzing and modeling the UPD during a virtual navigation task. In particular, we characterize and analyze the UPD for a dataset containing the positions and orientations of eighteen users performing a virtual slalom task using virtual steering techniques. Participants wore a head-mounted display and had to follow three different sinusoidal-like trajectories (with low, medium and high curvature) using a torso-steering navigation technique. We analyzed the performed motions and proposed two UPD models: the first based on a linear regression analysis and the second based on a Gaussian Mixture Model (GMM) analysis. Then, we assessed both models through a simulation-based evaluation where we reproduced the same navigation task using virtual agents. Our results indicate the feasibility of using simulation-based evaluations to study UPD. The paper concludes with a discussion of potential applications of the results in order to gain a better understanding of UPD during steering and therefore improve the design of navigation techniques by compensating for UPD.
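As a loose illustration of the first (linear-regression) UPD model described above, the sketch below fits drift magnitude against path curvature by ordinary least squares; the function name and variables are ours, not taken from the paper.

```python
def fit_linear_drift(curvatures, drifts):
    """Least-squares fit of drift ~ a * curvature + b (illustrative only)."""
    n = len(curvatures)
    mean_x = sum(curvatures) / n
    mean_y = sum(drifts) / n
    sxx = sum((x - mean_x) ** 2 for x in curvatures)
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(curvatures, drifts))
    a = sxy / sxx                  # slope: drift per unit curvature
    b = mean_y - a * mean_x        # intercept
    return a, b
```

A GMM-based variant, as in the paper's second model, would instead fit a mixture distribution over the 2D drift vectors.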

Redirected Walking in Static and Dynamic Scenes Using Visibility Polygons

Niall L. Williams, University of Maryland Aniket Bera, University of Maryland Dinesh Manocha, University of Maryland

We present a new approach for redirected walking in static and dynamic scenes that uses techniques from robot motion planning to compute the redirection gains that steer the user on collision-free paths in the physical space. Our first contribution is a mathematical framework for redirected walking using concepts from motion planning and configuration spaces. This framework highlights various geometric and perceptual constraints that tend to make collision-free redirected walking difficult. We use our framework to propose an efficient solution to the redirection problem that uses the notion of visibility polygons to compute the free spaces in the physical environment and the virtual environment. The visibility polygon provides a concise representation of the entire space that is visible, and therefore walkable, to the user from their position within an environment. Using this representation of walkable space, we apply redirected walking to steer the user to regions of the visibility polygon in the physical environment that closely match the region that the user occupies in the visibility polygon in the virtual environment. We show that our algorithm is able to steer the user along paths that result in significantly fewer resets than existing state-of-the-art algorithms in both static and dynamic scenes. Our project website is available at https://gamma.umd.edu/vis_poly/ .
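A visibility polygon is typically computed by an angular sweep over polygon vertices; the grid-based ray-casting sketch below conveys the same core idea of "the set of cells visible, and therefore walkable, from the user's position". The discretized representation and function names are our own simplification, not the authors' implementation.

```python
def line_of_sight(grid, a, b):
    """True if no occupied cell ('#') blocks the straight segment a -> b."""
    (r0, c0), (r1, c1) = a, b
    steps = max(abs(r1 - r0), abs(c1 - c0))
    for i in range(1, steps):
        r = round(r0 + (r1 - r0) * i / steps)
        c = round(c0 + (c1 - c0) * i / steps)
        if grid[r][c] == '#':
            return False
    return grid[r1][c1] != '#'

def visible_cells(grid, origin):
    """All grid cells with an unobstructed line of sight from origin."""
    return {(r, c)
            for r in range(len(grid))
            for c in range(len(grid[0]))
            if line_of_sight(grid, origin, (r, c))}
```

Matching the physical and virtual visible regions then reduces to comparing two such sets, one per environment.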

Using Multi-Level Precueing to Improve Performance in Path-Following Tasks in Virtual Reality

Jen-Shuo Liu, Columbia University Carmine Elvezio, Columbia University Barbara Tversky, Columbia University Steven Feiner, Columbia University

Work on VR and AR task interaction and visualization paradigms has typically focused on providing information about the current step (a cue) immediately before or during its performance. Some research has also shown benefits to simultaneously providing information about the next step (a precue). We explore whether it would be possible to improve efficiency by precueing information about multiple upcoming steps before completing the current step. To accomplish this, we developed a remote VR user study comparing task completion time and subjective metrics for different levels and styles of precueing in a path-following task. Our visualizations vary the precueing level (number of steps precued in advance) and style (whether the path to a target is communicated through a line to the target, and whether the place of a target is communicated through graphics at the target). Participants in our study performed best when given two to three precues for visualizations using lines to show the path to targets. However, performance degraded when four precues were used. On the other hand, participants performed best with only one precue for visualizations without lines, showing only the places of targets, and performance degraded when a second precue was given. In addition, participants performed better using visualizations with lines than ones without lines.

Personal Identifiability of User Tracking Data During VR Training

Alec G Moore, University of Central Florida Ryan P. McMahan, University of Central Florida Hailiang Dong, University of Texas at Dallas Nicholas Ruozzi, University of Texas at Dallas

Recent research indicates that user tracking data from virtual reality (VR) experiences can be used to personally identify users with degrees of accuracy as high as 95%. However, these results indicating that VR tracking data should be understood as personally identifying data were based on observing 360° videos. In this paper, we present results based on sessions of user tracking data from an ecologically valid VR training application, which indicate that the prior claims may not be as applicable for identifying users beyond the context of observing 360° videos. Our results indicate that the degree of identification accuracy notably decreases between VR sessions. Furthermore, we present results indicating that user tracking data can be obfuscated by encoding positional data as velocity data, which has been successfully used to predict other user experience outcomes like simulator sickness and knowledge acquisition. These results, which show identification accuracies were reduced by more than half, indicate that velocity-based encoding can be used to reduce identifiability and help protect personal identifying data.
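The velocity-based obfuscation described above amounts to replacing absolute positions with frame-to-frame differences; a minimal sketch (naming is ours):

```python
def encode_velocity(positions, dt):
    """Replace absolute positions with frame-to-frame velocities,
    discarding the absolute (identifying) coordinates."""
    return [(b - a) / dt for a, b in zip(positions, positions[1:])]
```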

Paper Session 7: Modeling

Wednesday, 6 October 9:30 CEST UTC+2 Track A

Session Chair: Shohei Mori

Mobile3DScanner: An Online 3D Scanner for High-quality Object Reconstruction with a Mobile Device

Xiaojun Xiang, Sensetime Research Hanqing Jiang, Sensetime Research Guofeng Zhang, Computer Science College Yihao Yu, Sensetime Research Chenchen Li, Sensetime Research Xingbin Yang, Sensetime Research Danpeng Chen, Sensetime Research Hujun Bao, Zhejiang University

We present a novel online 3D scanning system for high-quality object reconstruction with a mobile device, called Mobile3DScanner. Using a mobile device equipped with an embedded RGBD camera, our system provides online 3D object reconstruction capability for users to acquire high-quality textured 3D object models. Starting with a simultaneous pose tracking and TSDF fusion module, our system allows users to scan an object with a mobile device to get a 3D model for real-time preview. After the real-time scanning process is completed, the scanned 3D model is globally optimized and mapped with multi-view textures as an efficient post-process to get the final textured 3D model on the mobile device. Unlike most existing state-of-the-art systems, which can only scan homeware objects such as toys with small dimensions due to the limited computation and memory resources of mobile platforms, our system can reconstruct objects with large dimensions such as statues. We propose a novel visual-inertial ICP approach to achieve real-time accurate 6DoF pose tracking of each incoming frame on the front end, while maintaining a keyframe pool on the back end where the keyframe poses are optimized by local BA. Simultaneously, the keyframe depth maps are fused by the optimized poses to a TSDF model in real time. In particular, we propose a novel adaptive voxel resizing strategy to solve the out-of-memory problem of large-dimension TSDF fusion on mobile platforms. In the post-process, the keyframe poses are globally optimized and the keyframe depth maps are optimized and fused to obtain a final object model with more accurate geometry. Quantitative and qualitative experiments demonstrate the effectiveness of the proposed 3D scanning system based on a mobile device, which can successfully achieve online high-quality 3D reconstruction of natural objects with larger dimensions for efficient AR content creation.
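The TSDF fusion step referenced above is conventionally a per-voxel weighted running average (Curless-Levoy style); a minimal sketch of that standard update, not the authors' mobile implementation:

```python
def fuse_tsdf(tsdf, weight, sdf_obs, max_weight=64.0):
    """Blend a new truncated-SDF observation into a voxel as a
    weighted running average; the weight is capped so old voxels
    can still adapt to newer observations."""
    new_tsdf = (tsdf * weight + sdf_obs) / (weight + 1.0)
    new_weight = min(weight + 1.0, max_weight)
    return new_tsdf, new_weight
```

Adaptive voxel resizing would additionally coarsen the voxel grid (and resample values like these) when the memory budget is exceeded.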

Parametric Model Estimation for 3D Clothed Humans from Point Clouds

Kangkan Wang, Nanjing University of Science and Technology Huayu Zheng, Nanjing University of Science and Technology Guofeng Zhang, Computer Science College Jian Yang, Nanjing University of Science and Technology

This paper presents a novel framework to estimate parametric models for 3D clothed humans from partial point clouds. It is a challenging problem due to factors such as arbitrary human shape and pose, large variations in clothing details, and significant missing data. Existing methods mainly focus on estimating the parametric model of undressed bodies or reconstructing the non-parametric 3D shapes from point clouds. In this paper, we propose a hierarchical regression framework to learn the parametric model of detailed human shapes from partial point clouds of a single depth frame. Benefiting from the favorable ability of deep neural networks to model nonlinearity, the proposed framework cascades several successive regression networks to estimate the parameters of detailed 3D human body models in a coarse-to-fine manner. Specifically, the first global regression network extracts global deep features of point clouds to obtain an initial estimation of the undressed human model. Based on the initial estimation, the local regression network then refines the undressed human model by using the local features of neighborhood points of human joints. Finally, the clothing details are inferred as an additive displacement on the refined undressed model using the vertex-level regression network. The experimental results demonstrate that the proposed hierarchical regression approach can accurately predict detailed human shapes from partial point clouds and outperform prior works in the recovery accuracy of 3D human models.

BuildingSketch: Freehand Mid-Air Sketching for Building Modeling

Zhihao Liu, Chinese Academy of Sciences Fanxing Zhang, Shenzhen Institutes of Advanced Technology Zhanglin Cheng, Shenzhen Institutes of Advanced Technology

Advancements in virtual reality (VR) technology enable us to rethink the way of interactive 3D modeling – intuitively creating 3D content directly in 3D space. However, conventional VR-based modeling is laborious and tedious to generate a detailed 3D model in full manual mode since users need to carefully draw almost the entire surface. In this paper, we present a freehand mid-air sketching system with the aid of deep learning techniques for modeling structured buildings, where the user freely draws a few key strokes in mid-air using his/her fingers to represent the desired shapes and our system automatically interprets the strokes using a deep neural network and generates a detailed building model based on a procedural modeling method. After creating several building blocks one by one, the user can freely move, rotate, and combine the blocks to form a complex building model. We demonstrate the ease of use for novice users, effectiveness, and efficiency of our sketching system, BuildingSketch, by presenting a variety of building models.

BDLoc: Global Localization from 2.5D Building Map

Hai Li, Zhejiang University Tianxing Fan, Zhejiang University Hongjia Zhai, Zhejiang University Zhaopeng Cui, Zhejiang University Hujun Bao, Zhejiang University Guofeng Zhang, Zhejiang University

Robust and accurate global 6DoF localization is essential for many applications, e.g., augmented reality and autonomous driving. Most existing 6DoF visual localization approaches need to build a dense texture model in advance, which is computationally expensive and almost infeasible at the global range. In this work, we propose BDLoc, a hierarchical global localization framework via the 2.5D building map, which is able to estimate the accurate pose of the query street-view image without using a detailed dense 3D model or texture information. Specifically, we first extract the 3D building information from the street-view image and the surrounding 2.5D building map, and then solve a coarse relative pose by local-to-global registration. In order to improve the feature extraction, we propose a novel SPG-Net which is able to capture both local and global features. Finally, an iterative semantic alignment is applied to obtain a finer result with differentiable rendering and the cross-view semantic constraint. Apart from a coarse longitude and latitude from GPS, BDLoc does not need any additional information such as altitude and orientation, which many previous works require. We also create a large dataset to explore the performance of the 2.5D map-based localization task. Extensive experiments demonstrate the superior performance of our method.

Distortion-aware room layout estimation from a single fisheye image

Ming Meng, Beihang University Likai Xiao, Beihang University Yi Zhou, Beijing BigView Technology Co. Ltd Zhaoxin Li, Chinese Academy of Sciences Zhong Zhou, Beihang University

Omnidirectional images with a 180° or 360° field of view provide the entire visual content around the capture cameras, enabling more sophisticated scene understanding and reasoning and bringing broad application prospects for VR/AR/MR. As a result, research on omnidirectional image layout estimation has sprung up in recent years. However, existing layout estimation methods designed for panorama images cannot perform well on fisheye images, mainly due to the lack of a public fisheye dataset as well as the significant differences in the positions and degrees of distortion caused by different projection models. To fill these gaps, in this work we first reuse the released large-scale panorama datasets and reproduce them as fisheye images via projection conversion, thereby circumventing the challenge of obtaining high-quality fisheye datasets with ground-truth layout annotations. Then, we propose a distortion-aware module based on the distortion of the orthographic projection (i.e., OrthConv) to perform effective feature extraction from fisheye images. Additionally, we exploit a bidirectional LSTM with a two-dimensional step mode for horizontal and vertical prediction to capture the long-range geometric pattern of the object, yielding globally coherent predictions even in occluded and cluttered scenes. We extensively evaluate our deformable convolution for the room layout estimation task. In comparison with state-of-the-art approaches, our approach produces considerable performance gains on a real-world dataset as well as on a synthetic dataset. This technology provides high-efficiency, low-cost technical implementations for VR house viewing and MR video surveillance. We present an MR-based building video surveillance scene equipped with nine fisheye lenses that achieves an immersive hybrid display experience, which can be used for intelligent building management in the future.

Paper Session 8: Redirected Walking & Locomotion

Wednesday, 6 October 9:30 CEST UTC+2 Track B

Session Chair: Ferran Argelaguet

OpenRDW: A Redirected Walking Library and Benchmark with Multi-User, Learning-based Functionalities and State-of-the-art Algorithms

Yi-Jun Li, Beihang University Miao Wang, Beihang University Prof. Dr. Frank Steinicke, Universität Hamburg Qinping Zhao, Beihang University

Redirected walking (RDW) is a locomotion technique that guides users on virtual paths, which might vary from the paths they physically walk in the real world. Thereby, RDW enables users to explore a virtual space that is larger than the physical counterpart with near-natural walking experiences. Several approaches have been proposed and developed, each using its own platform and evaluated on a custom dataset, making it challenging to compare methods; there are few public toolkits and recognized benchmarks in this field. In this paper, we introduce OpenRDW, an open-source library and benchmark for developing, deploying, and evaluating a variety of methods for walking path redirection. The OpenRDW library provides application program interfaces to access the attributes of scenes, to customize the RDW controllers, to simulate and visualize the navigation process, to export multiple formats of the results, and to evaluate RDW techniques. It also supports the deployment of multi-user real walking, as well as reinforcement learning-based models exported from TensorFlow or PyTorch. The OpenRDW benchmark includes multiple testing conditions, such as walking in tracking spaces of varied size or shape, with obstacles, with multiple users, etc. In addition, procedurally generated paths and walking paths collected from user experiments are provided for a comprehensive evaluation. The benchmark also contains several classic and state-of-the-art RDW techniques that exercise the above-mentioned functionalities.

Redirected Walking using Continuous Curvature Manipulation

Hiroaki Sakono, The University of Tokyo Keigo Matsumoto, The University of Tokyo Takuji Narumi, The University of Tokyo Hideaki Kuzuoka, The University of Tokyo

In this paper, we propose a novel redirected walking (RDW) technique that applies dynamic bending and curvature gains so that users perceive less discomfort than with existing techniques that apply constant gains. Humans are less likely to notice continuous changes than sudden ones. Therefore, instead of applying constant bending or curvature gains to users, we propose a dynamic method that continuously changes the gains. We conduct experiments to investigate the effect of dynamic gains in bending and curvature manipulation with regard to discomfort. The experimental results show that the proposed method significantly suppresses discomfort by up to 16% and 9% for bending and curvature manipulations, respectively.
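One way to realize a continuously changing gain is a smooth oscillation between a minimum and maximum value; the schedule below is a sketch under our own assumptions, not the exact schedule used in the paper.

```python
import math

def continuous_curvature_gain(t, period, g_min, g_max):
    """Gain that varies smoothly between g_min and g_max over `period`
    seconds, avoiding the abrupt changes of a constant-gain switch."""
    phase = 2.0 * math.pi * t / period
    return g_min + (g_max - g_min) * 0.5 * (1.0 - math.cos(phase))
```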

A Reinforcement Learning Approach to Redirected Walking with Passive Haptics

Ze-Yin Chen, Beihang University Yi-Jun Li, Beihang University Miao Wang, Beihang University Frank Steinicke, Universität Hamburg Qinping Zhao, Beihang University

Various redirected walking (RDW) techniques have been proposed, which imperceptibly manipulate the mapping from the user’s physical locomotion to motions of the virtual camera. Thereby, RDW techniques guide users on physical paths with the goal of keeping them inside a limited tracking area, whereas users perceive the illusion of being able to walk infinitely in the virtual environment. However, the inconsistency between the user’s virtual and physical location hinders passive haptic feedback when the user interacts with virtual objects, which are represented by physical props in the real environment.

In this paper, we present a novel reinforcement learning approach towards RDW with passive haptics. With a novel dense reward function, our method learns to jointly consider physical boundary avoidance and consistency of user-object positioning between virtual and physical spaces. The weights of reward and penalty terms in the reward function are dynamically adjusted to adaptively balance term impacts during the walking process. Experimental results demonstrate the advantages of our technique in comparison to previous approaches. Finally, the code of our technique is provided as an open-source solution.
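A dense reward with dynamically adjusted weights could look roughly like the sketch below; the specific terms and weight schedule here are hypothetical illustrations of the idea, not the paper's actual reward function.

```python
def rdw_reward(dist_to_boundary, alignment_error, t, horizon):
    """Hypothetical dense RDW reward: reward staying clear of the
    physical boundary, penalize virtual/physical object misalignment,
    with weights shifting over the episode."""
    w_align = t / horizon        # alignment matters more late in the task
    w_safe = 1.0 - w_align       # boundary avoidance dominates early on
    return w_safe * min(dist_to_boundary, 1.0) - w_align * alignment_error
```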

Redirected Walking Using Noisy Galvanic Vestibular Stimulation

Keigo Matsumoto, The University of Tokyo Kazuma Aoyama, The University of Tokyo Takuji Narumi, The University of Tokyo Hideaki Kuzuoka, The University of Tokyo

In this study, considering the characteristics of multisensory integration, we examined a method for improving redirected walking (RDW) by adding noise to the vestibular system to reduce the effects of vestibular inputs on self-motion perception. In RDW, the contradiction between vestibular inputs and visual sensations may make users notice the RDW manipulation, resulting in discomfort throughout the experience. Because humans integrate multisensory information by considering the reliability of each modality, reducing the effects of vestibular inputs on self-motion perception makes it possible to suppress awareness of and discomfort during RDW manipulation and to improve the effectiveness of the manipulation. Therefore, we hypothesized that adding noise to the vestibular inputs would reduce the reliability of the vestibular sensations and enhance the effectiveness of RDW by improving the relative reliability of vision. To reduce the reliability of vestibular inputs, we employed noisy galvanic vestibular stimulation (GVS), a method of stimulating vestibular organs and nerves by applying small electrical currents to the bilateral mastoids, with a white-noise current pattern. We conducted an experiment comparing the thresholds of curvature gains between noisy GVS conditions and a control condition.

RNIN-VIO: Robust Neural Inertial Navigation Aided Visual-Inertial Odometry in Challenging Scenes

Danpeng Chen, Computer Science College Nan Wang, Sensetime Runsen Xu, Zhejiang University Weijian Xie, Computer Science College Hujun Bao, Zhejiang University Guofeng Zhang, Computer Science College

In this work, we propose a tightly-coupled EKF framework for visual-inertial odometry aided by NIN (Neural Inertial Navigation). Traditional VIO systems are fragile in challenging scenes with weak or confusing visual information, such as weak/repeated texture, dynamic environments, or fast camera motion with serious motion blur. It is extremely difficult for a vision-based algorithm to handle these problems. We therefore first design a robust deep learning based inertial network (called RNIN), using only IMU measurements as input. RNIN is significantly more robust in challenging scenes than traditional VIO systems. In order to take full advantage of vision-based algorithms in AR/VR areas, we further develop a multi-sensor fusion system, RNIN-VIO, which tightly couples the visual, IMU, and NIN measurements. Our system performs robustly in extremely challenging conditions, with high precision both in trajectories and AR effects. The experimental results on dataset evaluation and an online AR demo demonstrate the superiority of the proposed system in robustness and accuracy.
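At its core, a tightly-coupled EKF alternates an inertial prediction with a visual correction; the scalar sketch below illustrates that structure only (it is not the RNIN-VIO implementation, whose state is a full 6DoF pose with learned inertial measurements).

```python
def ekf_step(x, P, u, z, q, r):
    """One scalar EKF cycle: inertial prediction (increment u, process
    noise q), then correction with a visual measurement z (noise r)."""
    x_pred = x + u             # predict with the inertial increment
    P_pred = P + q             # propagate uncertainty
    K = P_pred / (P_pred + r)  # Kalman gain
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new
```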

PAVAL: Position-Aware Virtual Agent Locomotion for Assisted VR Navigation

Ziming Ye, Beihang University Junlong Chen, Beihang University Miao Wang, Beihang University Yong-Liang Yang, University of Bath

Virtual agents are typical assistance tools for navigation and interaction in Virtual Reality (VR) tour, training, education, etc. It has been demonstrated that the gaits, gestures, gazes, and positions of virtual agents are major factors that affect the user’s perception and experience for seated and standing VR. In this paper, we present a novel position-aware virtual agent locomotion method, called PAVAL, that can perform virtual agent positioning (position+orientation) in real time for room-scale VR navigation assistance. We first analyze design guidelines for virtual agent locomotion and model the problem using the positions of the user and the surrounding virtual objects. Then we conduct a one-off preliminary study to collect subjective data and present a model for virtual agent positioning prediction with fixed user position. Based on the model, we propose an algorithm to optimize the object of interest, virtual agent position, and virtual agent orientation in sequence for virtual agent locomotion. As a result, during user navigation in a virtual scene, the virtual agent automatically moves in real time and introduces virtual object information to the user. We evaluate PAVAL and two alternative methods via a user study with humanoid virtual agents in various scenes, including a virtual museum, factory, and school gym. The results reveal that our method is superior to the baseline condition.

Paper Session 9: Session Frameworks & Datasets

Wednesday, 6 October 16:00 CEST UTC+2 Track A

Session Chair: Benjamin Weyers

TEyeD: Over 20 million real-world eye images with Pupil, Eyelid, and Iris 2D and 3D Segmentations, 2D and 3D Landmarks, 3D Eyeball, Gaze Vector, and Eye Movement Types

Wolfgang Fuhl, Wilhelm Schickard Institut Gjergji Kasneci, University of Tubingen Enkelejda Kasneci, University of Tubingen

We present TEyeD, the world’s largest unified public data set of eye images taken with head-mounted devices. TEyeD was acquired with seven different head-mounted eye trackers. Among them, two eye trackers were integrated into virtual reality (VR) or augmented reality (AR) devices. The images in TEyeD were obtained from various tasks, including car rides, simulator rides, outdoor sports activities, and daily indoor activities. The data set includes 2D & 3D landmarks, semantic segmentation, 3D eyeball annotation, and the gaze vector and eye movement types for all images. Landmarks and semantic segmentation are provided for the pupil, iris and eyelids. Video lengths vary from a few minutes to several hours. With more than 20 million carefully annotated images, TEyeD provides a unique, coherent resource and a valuable foundation for advancing research in the field of computer vision, eye tracking and gaze estimation in modern VR and AR applications. Data and code at:

https://unitc-my.sharepoint.com/:f:/g/personal/iitfu01_cloud_uni-tuebingen_de/EvrNPdtigFVHtCMeFKSyLlUBepOcbX0nEkamweeZa0s9SQ

Supporting Iterative Virtual Reality Analytics Design and Evaluation by Systematic Generation of Surrogate Clustered Datasets

Slawomir Konrad Tadeja, University of Cambridge Patrick Langdon, University of Cambridge Per Ola Kristensson, University of Cambridge

Virtual Reality (VR) is a promising technology platform for immersive visual analytics. However, the design space of VR analytics interface design is vast and difficult to explore using traditional A/B comparisons in formal or informal controlled experiments—a fundamental part of an iterative design process. A key factor that complicates such comparisons is the dataset. Exposing participants to the same dataset in all conditions introduces an unavoidable learning effect. On the other hand, using different datasets for all experimental conditions introduces the dataset itself as an uncontrolled variable, which reduces internal validity to an unacceptable degree. In this paper, we propose to rectify this problem by introducing a generative process for synthesizing clustered datasets for VR analytics experiments. This process generates datasets that are distinct while simultaneously allowing systematic comparisons in experiments. A key advantage is that these datasets can then be used in iterative design processes. In a two-part experiment, we show the validity of the generative process and demonstrate how new insights in VR-based visual analytics can be gained using synthetic datasets.
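A generative process of this kind can be as simple as sampling cluster centers and Gaussian point clouds with matched parameters but different random seeds, so the datasets are distinct yet statistically comparable; the sketch below is our own minimal illustration, not the paper's generator.

```python
import random

def make_clustered_dataset(n_clusters, pts_per_cluster, spread, seed):
    """2D clustered dataset: same statistical recipe per experimental
    condition, different seed, so conditions stay comparable without
    repeating the exact same data."""
    rng = random.Random(seed)
    data = []
    for _ in range(n_clusters):
        cx, cy = rng.uniform(-10, 10), rng.uniform(-10, 10)
        for _ in range(pts_per_cluster):
            data.append((cx + rng.gauss(0, spread),
                         cy + rng.gauss(0, spread)))
    return data
```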

ARENA: The Augmented Reality Edge Networking Architecture

Nuno Pereira, Carnegie Mellon University Anthony Rowe, Carnegie Mellon University Michael W Farb, Carnegie Mellon University Ivan Liang, Carnegie Mellon University Edward Lu, Carnegie Mellon University Eric Riebling, Carnegie Mellon University

Many have predicted the future of the Web to be the integration of Web content with the real-world through technologies such as Augmented Reality (AR). This has led to the rise of Extended Reality (XR) Web Browsers used to shorten the long AR application development and deployment cycle of native applications especially across different platforms. As XR Browsers mature, we face new challenges related to collaborative and multi-user applications that span users, devices, and machines. These collaborative XR applications require: (1) networking support for scaling to many users, (2) mechanisms for content access control and application isolation, and (3) the ability to host application logic near clients or data sources to reduce application latency. In this paper, we present the design and evaluation of the AR Edge Networking Architecture ARENA which is a platform that simplifies building and hosting collaborative XR applications on WebXR capable browsers. ARENA provides a number of critical components including: a hierarchical geospatial directory service that connects users to nearby servers and content, a token-based authentication system for controlling user access to content, and an application/service runtime supervisor that can dispatch programs across any network connected device. All of the content within ARENA exists as endpoints in a PubSub scene graph model that is synchronized across all users. We evaluate ARENA in terms of client performance as well as benchmark end-to-end response-time as load on the system scales. We show the ability to horizontally scale the system to Internet-scale with scenes containing hundreds of users with latencies on the order of tens of milliseconds. Finally, we highlight projects built using ARENA and showcase how our approach dramatically simplifies collaborative multi-user XR development compared to monolithic approaches.

TransforMR: Pose-Aware Object Substitution for Composing Alternate Mixed Realities

Mohamed Kari, Porsche AG Tobias Grosse-Puppendahl, Porsche AG Luis Falconeri Coelho, Porsche AG Andreas Rene Fender, ETH Zürich David Bethge, Porsche AG Reinhard Schütte, Institute for Computer Science and Business Information Systems Christian Holz, ETH Zürich

Despite the advances in machine perception, semantic scene understanding is still a limiting factor in mixed reality scene composition. In this paper, we present TransforMR, a video see-through mixed reality system for mobile devices that performs 3D-pose-aware object substitution to create meaningful mixed reality scenes. In real-time and for previously unseen and unprepared real-world environments, TransforMR composes mixed reality scenes so that virtual objects assume behavioral and environment-contextual properties of replaced real-world objects. This yields meaningful, coherent, and human-interpretable scenes, not yet demonstrated by today’s augmentation techniques. TransforMR creates these experiences through our novel pose-aware object substitution method building on different 3D object pose estimators, instance segmentation, video inpainting, and pose-aware object rendering. TransforMR is designed for use in the real-world, supporting the substitution of humans and vehicles in everyday scenes, and runs on mobile devices using just their monocular RGB camera feed as input. We evaluated TransforMR with eight participants in an uncontrolled city environment employing different transformation themes. Applications of TransforMR include real-time character animation analogous to motion capturing in professional film making, however without the need for preparation of either the scene or the actor, as well as narrative-driven experiences that allow users to explore fictional parallel universes in mixed reality.

Excite-O-Meter: Software Framework to Integrate Bodily Signals in Virtual Reality Experiments

Luis Quintero, Stockholm University John Edison Muñoz Cardona, University of Waterloo Jeroen de Mooij, Thefirstfloor.nl Michael Gaebler, Max Planck Institute for Human Cognitive and Brain Sciences

Bodily signals can complement subjective and behavioral measures to analyze human factors, such as user engagement or stress, when interacting with virtual reality (VR) environments. Enabling widespread use, and real-time analysis, of bodily signals in VR applications could be a powerful method to design more user-centric, personalized VR experiences. However, technical and scientific challenges (e.g., cost of research-grade sensing devices, required coding skills, expert knowledge needed to interpret the data) complicate the integration of bodily data in existing interactive applications. This paper presents the design, development, and evaluation of an open-source software framework named Excite-O-Meter. It allows existing VR applications to integrate, record, analyze, and visualize bodily signals from wearable sensors, with the example of cardiac activity (heart rate and its variability) from the Polar H10 chest strap. Survey responses from 58 potential users determined the design requirements for the framework. Two tests evaluated the framework and setup in terms of data acquisition/analysis and data quality. Finally, we present an example experiment that shows how our tool can be an easy-to-use and scientifically validated instrument for researchers, hobbyists, or game designers to integrate bodily signals in VR applications.

Paper Session 10: Applications

Wednesday, 6 October 16:00 CEST UTC+2 Track B

Session Chair: Voicu Popescu YouTube Stream (non-interactive) Discord Channel for Zoom link and Interactive Q&A Access (registered attendees only) :  Browser , App Post-Session Discussion with Authors in Gathertown Room: Q&A Track B

Design and Evaluation of Personalized Percutaneous Coronary Intervention Surgery Simulation System

Shuai Li, Beihang University Jiahao Cui, Beihang University Aimin Hao, Beihang University Shuyang Zhang, Peking Union Medical College Hospital Qinping Zhao, Beihang University

In recent years, medical simulators have been widely applied to a broad range of surgery training tasks. However, most existing surgery simulators can only provide limited immersive environments with a few pre-processed organ models, while ignoring the instant modeling of various personalized clinical cases, which introduces substantive differences between training experiences and real surgery situations. To this end, we present a virtual reality (VR) based surgery simulation system for personalized percutaneous coronary intervention (PCI). The simulation system can directly take patient-specific clinical data as input and generate virtual 3D intervention scenarios. Specifically, we introduce a fiber-based patient-specific cardiac dynamic model to simulate the nonlinear deformation among the multiple layers of the cardiac structure, which faithfully correlates the atria, ventricles, and vessels, and thus gives rise to more effective visualization and interaction. Meanwhile, we design tracking and haptic feedback hardware that enables users to manipulate physical intervention instruments and interact with virtual scenarios. We conduct quantitative analysis of deformation precision and modeling efficiency, and evaluate the simulation system based on user studies with 16 cardiologists and 20 intervention trainees, comparing it to traditional desktop intervention simulators. The results confirm that our simulation system provides a better user experience and is a suitable platform for PCI surgery training and rehearsal.

Augmented Reality for Subsurface Utility Engineering, Revisited

Lasse Hedegaard Hansen, Aalborg University Philipp Fleck, Graz University of Technology Marco Stranner, Institute for Computer Graphics and Vision Dieter Schmalstieg, Graz University of Technology Clemens Arth, AR4 GmbH

Civil engineering is a primary domain for new augmented reality technologies. In this work, the area of subsurface utility engineering is revisited, and new methods tackling well-known, yet unsolved problems are presented. We describe our solution to the outdoor localization problem, which is deemed one of the most critical issues in outdoor augmented reality, proposing a novel, lightweight hardware platform to generate highly accurate position and orientation estimates in a global context. Furthermore, we present new approaches to drastically improve realism of outdoor data visualizations. First, a novel method to replace physical spray markings by indistinguishable virtual counterparts is described. Second, the visualization of 3D reconstructions of real excavations is presented, fusing seamlessly with the view onto the real environment. We demonstrate the power of these new methods on a set of different outdoor scenarios.

A Compelling Virtual Tour of the Dunhuang Cave With an Immersive Head-Mounted Display

Ping-Hsuan Han, National Taiwan University Yang-Sheng Chen, National Taiwan University Iou-Shiuan Liu, National Taiwan University Yi-Ping Jang, National Taiwan University Ling Tsai, National Taiwan University Alvin Chang, National Taiwan University Yi-Ping Hung, National Taiwan University

Invited CG&A Paper

The Dunhuang Caves are home to the largest Buddhist art sites in the world and are listed as a UNESCO World Heritage Site. Over time, the murals have been damaged by both humans and nature. In this article, we present an immersive virtual reality system for exploring spatial cultural heritage, which utilizes the digitized data from the Dunhuang Research Academy to represent the virtual environment of the cave. In this system, the interaction techniques that allow users to flexibly experience any of the artifacts or displays contribute to their understanding of the cultural heritage. Additionally, we evaluated the system by conducting a user study to examine the extent of user acquaintance after the entire experience. Our results show what participants learned from the spatial context and augmented information in VR. These findings can serve as design considerations for developing VR experiences of other spatial heritage sites.

The Passenger Experience of Mixed Reality Virtual Display Layouts In Airplane Environments

Alexander Ng, University of Glasgow Daniel Medeiros, University of Glasgow Mark McGill, University of Glasgow Julie R. Williamson, University of Glasgow Stephen Anthony Brewster, University of Glasgow

Augmented / Mixed Reality headsets will in time see adoption and use in a variety of mobility and transit contexts, allowing users to view and interact with virtual content and displays for productivity and entertainment. However, little is known regarding how multi-display virtual workspaces should be presented in a transit context, nor to what extent the unique affordances of transit environments (e.g. the social presence of others) might influence passenger perception of virtual display layouts. Using a simulated VR passenger airplane environment, we evaluated three different AR-driven virtual display configurations (Horizontal, Vertical, and Focus main display with smaller secondary windows) at two different depths, exploring their usability, user preferences, and the underlying factors that influenced those preferences. We found that the perception of invading others’ personal space significantly influenced preferred layouts in transit contexts. Based on our findings, we reflect on the unique challenges posed by passenger contexts, provide recommendations regarding virtual display layout in the confined airplane environment, and expand on the significant benefits that AR offers over physical displays in said environments.

FLASH: Video AR Anchors for Live Events

Edward Lu, Carnegie Mellon University John Miller, Carnegie Mellon University Nuno Pereira, Carnegie Mellon University Anthony Rowe, Carnegie Mellon University

Public spaces like concert stadiums and sporting arenas are ideal venues for AR content delivery to crowds of mobile phone users. Unfortunately, these environments tend to be some of the most challenging in terms of lighting and dynamic staging for vision-based relocalization. In this paper, we introduce FLASH, a system for delivering AR content within challenging lighting environments that uses active tags (i.e. blinking) with detectable features from passive tags (quads) for marking regions of interest and determining pose. This combination allows the tags to be detectable from long distances with significantly less computational overhead per frame, making it possible to embed tags in existing video displays like large jumbotrons. To aid in pose acquisition, we implement a gravity-assisted pose solver that removes the ambiguous solutions that are often encountered when trying to localize using standard passive tags. We show that our technique outperforms similarly sized passive tags in terms of range by 20-30% and is fast enough to run at 30 FPS even within a mobile web browser on a smartphone.
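The core of the active-tag idea can be illustrated with a toy temporal check (a hypothetical sketch for illustration only, not the FLASH implementation): a blinking tag region shows strong, sign-alternating frame-to-frame intensity changes, whereas static scene content, however bright, does not.

```python
def is_active_tag(samples, min_amplitude=50):
    """Heuristic blink detector for a candidate image region.

    samples: mean pixel intensities of the region over consecutive frames.
    Returns True when the intensity alternates direction every frame
    (blinking) with at least `min_amplitude` of change per frame.
    """
    # Frame-to-frame intensity changes.
    diffs = [samples[i + 1] - samples[i] for i in range(len(samples) - 1)]
    # A blinking tag flips bright/dark, so consecutive diffs alternate sign.
    alternating = all(diffs[i] * diffs[i + 1] < 0 for i in range(len(diffs) - 1))
    # And every flip is large; slow drift or sensor noise is not.
    strong = min(abs(d) for d in diffs) >= min_amplitude
    return alternating and strong
```

A region blinking between roughly 20 and 200 intensity passes this check, while a static region with small noise fails it; a real system would of course operate on per-pixel statistics with proper thresholding and temporal synchronization.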

Paper Session 11: Tracking & Prediction

Wednesday, 6 October 18:00 CEST UTC+2 Track A

Session Chair: Alain Pagani

Simulating Realistic Human Motion Trajectories of Mid-Air Gesture Typing

Junxiao Shen, University of Cambridge John J Dudley, University of Cambridge Per Ola Kristensson, University of Cambridge

The eventual success of many AR and VR intelligent interactive systems relies on the ability to collect user motion data at large scale. Realistic simulation of human motion trajectories is a potential solution to this problem. Simulated user motion data can facilitate prototyping and speed up the design process. There are also potential benefits in augmenting training data for deep learning-based AR/VR applications to improve performance. However, the generation of realistic motion data is nontrivial. In this paper, we examine the specific challenge of simulating index finger movement data to inform mid-air gesture keyboard design. The mid-air gesture keyboard is deployed on an optical see-through display that allows the user to enter text by articulating word gesture patterns with their physical index finger in the vicinity of a visualized keyboard layout. We propose and compare four different approaches to simulating this type of motion data, including a Jerk-Minimization model, a Recurrent Neural Network (RNN)-based generative model, and a Generative Adversarial Network (GAN)-based model with two modes: style transfer and data alteration. We also introduce a procedure for validating the quality of the generated trajectories in terms of realism and diversity. The GAN-based model shows significant potential for generating synthetic motion trajectories to facilitate design and deep learning for advanced gesture keyboards deployed in AR and VR.
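For reference, Jerk-Minimization models of point-to-point movement typically build on the classical minimum-jerk polynomial, which over normalized time τ ∈ [0, 1] scales the displacement by 10τ³ − 15τ⁴ + 6τ⁵. A minimal 1D sketch of that general technique (an illustration of the standard formula, not the authors' simulation code):

```python
def minimum_jerk(x0, xf, n=5):
    """Sample an n-point minimum-jerk trajectory from x0 to xf.

    The time-scaling polynomial s(tau) = 10*tau^3 - 15*tau^4 + 6*tau^5
    yields zero velocity and acceleration at both endpoints.
    """
    traj = []
    for i in range(n):
        tau = i / (n - 1)  # normalized time in [0, 1]
        s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5
        traj.append(x0 + (xf - x0) * s)
    return traj
```

Extending this per axis gives smooth 3D fingertip paths; the paper's learned (RNN- and GAN-based) models exist precisely because real gesture-typing motion deviates from such idealized profiles.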

Cybersickness Prediction from Integrated HMD's Sensors: A Multimodal Deep Fusion Approach using Eye-tracking and Head-tracking Data

Rifatul Islam, University of Texas at San Antonio John Quarles, University of Texas at San Antonio Kevin Desai, The University of Texas at San Antonio

Cybersickness prediction is one of the significant research challenges for real-time cybersickness reduction. Researchers have proposed different approaches for predicting cybersickness from bio-physiological data (e.g., heart rate, breathing rate, electroencephalogram). However, collecting bio-physiological data often requires external sensors, limiting locomotion and 3D-object manipulation during the virtual reality (VR) experience. Limited research has been done to predict cybersickness from the data readily available from the integrated sensors in head-mounted displays (HMDs) (e.g., head-tracking, eye-tracking, motion features), allowing free locomotion and 3D-object manipulation. This research proposes a novel deep fusion network to predict cybersickness severity from heterogeneous data readily available from the integrated HMD sensors. We extracted 1755 stereoscopic videos, eye-tracking, and head-tracking data along with the corresponding self-reported cybersickness severity collected from 30 participants during their VR gameplay. We applied several deep fusion approaches with the heterogeneous data collected from the participants. Our results suggest that cybersickness can be predicted with an accuracy of 87.77% and a root-mean-square error of 0.51 when using only eye-tracking and head-tracking data. We concluded that eye-tracking and head-tracking data are well suited for a standalone cybersickness prediction framework.
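Reduced to its simplest form, the multimodal fusion idea combines per-modality features into a single severity score. A hypothetical late-fusion sketch (the paper's deep fusion network is far richer; the weights and feature names here are illustrative assumptions):

```python
import math

def fuse_predict(eye_feats, head_feats, w_eye, w_head, bias=0.0):
    """Late fusion of two modalities into a cybersickness score.

    Each modality contributes its own weighted sum to a shared logit,
    which a sigmoid squashes into a severity score in (0, 1).
    """
    z = bias
    z += sum(w * x for w, x in zip(w_eye, eye_feats))    # eye-tracking branch
    z += sum(w * x for w, x in zip(w_head, head_feats))  # head-tracking branch
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
```

In a deep fusion network, each branch would be a learned encoder and the combination would happen on intermediate feature maps rather than raw weighted sums, but the principle of merging heterogeneous sensor streams into one prediction is the same.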

A Comparison of the Fatigue Progression of Eye-Tracked and Motion-Controlled Interaction in Immersive Space

Lukas Maximilian Masopust, University of California, Davis David Bauer, University of California, Davis Siyuan Yao, University of California, Davis Kwan-Liu Ma, University of California, Davis

Eye-tracking enabled virtual reality (VR) headsets have recently become more widely available. This opens up opportunities to incorporate eye gaze interaction methods in VR applications. However, studies on the fatigue-induced performance fluctuations of these new input modalities are scarce and rarely provide a direct comparison with established interaction methods. We conduct a study to compare the selection-interaction performance between commonly used handheld motion control devices and emerging eye interaction technology in VR. We investigate each interaction’s unique fatigue progression pattern in study sessions with ten minutes of continuous engagement. The results support and extend previous findings regarding the progression of fatigue in eye-tracked interaction over prolonged periods. By directly comparing gaze- with motion-controlled interaction, we put the emerging eye-trackers into perspective with the state-of-the-art interaction method for immersive space. We then discuss potential implications for future extended reality (XR) interaction design based on our findings.

DVIO: Depth-Aided Visual Inertial Odometry for RGBD Sensors

Abhishek Tyagi, SOC R&D, Samsung Semiconductor, Inc. Yangwen Liang, SOC R&D, Samsung Semiconductor, Inc. Shuangquan Wang, SOC R&D, Samsung Semiconductor, Inc. Dongwoon Bai, SOC R&D, Samsung Semiconductor, Inc.

In the past few years, we have observed an increase in the usage of RGBD sensors in mobile devices. These sensors provide a good estimate of the depth map for the camera frame, which can be used in numerous augmented reality applications. This paper presents a new visual inertial odometry (VIO) system, which uses measurements from an RGBD sensor and an inertial measurement unit (IMU) for estimating the motion state of the mobile device. The resulting system is called the depth-aided VIO (DVIO) system. In this system, we add the depth measurement as part of the nonlinear optimization process. Specifically, we propose methods to use the depth measurement with one-dimensional (1D) as well as three-dimensional (3D) feature parameterization. In addition, we propose to utilize the depth measurement for estimating the time offset between the unsynchronized IMU and RGBD sensors. Last but not least, we propose a novel block-based marginalization approach to speed up the marginalization process and maintain the real-time performance of the overall system. Experimental results validate that the proposed DVIO system outperforms other state-of-the-art VIO systems in terms of trajectory accuracy as well as processing time.

Instant Visual Odometry Initialization for Mobile AR

Alejo Concha Belenguer, Facebook Jesus Briales, Facebook Christian Forster, Facebook Luc Oth, Facebook Michael Burri, Facebook

Mobile AR applications benefit from fast initialization to display world-locked effects instantly. However, standard visual odometry or SLAM algorithms require motion parallax to initialize (see Figure 1) and, therefore, suffer from delayed initialization. In this paper, we present a 6-DoF monocular visual odometry that initializes instantly and without motion parallax. Our main contribution is a pose estimator that decouples estimating the 5-DoF relative rotation and translation direction from the 1-DoF translation magnitude. While scale is not observable in a monocular vision-only setting, it is still paramount to estimate a consistent scale over the whole trajectory (even if not physically accurate) to avoid AR effects moving erroneously along depth. In our approach, we leverage the fact that depth errors are not perceivable to the user during rotation-only motion. However, as the user starts translating the device, depth becomes perceivable and so does the capability to estimate consistent scale. Our proposed algorithm naturally transitions between these two modes. Our second contribution is a novel residual in the relative pose problem to further improve the results. The residual combines the Jacobians of the functional and the functional itself and is minimized using a Levenberg–Marquardt optimizer on the 5-DoF manifold. We perform extensive validations of our contributions with both a publicly available dataset and synthetic data. We show that the proposed pose estimator outperforms the classical approaches for 6-DoF pose estimation used in the literature in low-parallax configurations. Likewise, we show our relative pose estimator outperforms state-of-the-art approaches in an odometry pipeline configuration where we can leverage initial guesses. We release a dataset for the relative pose problem using real data to facilitate the comparison with future solutions for the relative pose problem. 
Our solution can be used either as a full odometry system or as a pre-SLAM component of any supported SLAM system (ARKit, ARCore) in world-locked AR effects on platforms such as Instagram and Facebook.

Paper Session 12: XR Experiences & Guidance

Wednesday, 6 October 18:00 CEST UTC+2 Track B

Session Chair: John Quarles

Virtual Animals as Diegetic Attention Guidance Mechanisms in 360-Degree Experiences

Nahal Norouzi, University of Central Florida Gerd Bruder, University of Central Florida Austin Erickson, University of Central Florida Kangsoo Kim, University of Calgary Jeremy N. Bailenson, Stanford University Pamela J. Wisniewski, University of Central Florida Charles E Hughes, University of Central Florida Greg Welch, University of Central Florida

360-degree experiences such as cinematic virtual reality and 360-degree videos are becoming increasingly popular. In most examples, viewers can freely explore the content by changing their orientation. However, in some cases, this increased freedom may lead to viewers missing important events within such experiences. Thus, a recent research thrust has focused on studying mechanisms for guiding viewers’ attention while maintaining their sense of presence and fostering a positive user experience. One approach is the utilization of diegetic mechanisms, characterized by an internal consistency with respect to the narrative and the environment, for attention guidance. While such mechanisms are highly attractive, their uses and potential implementations are still not well understood. Additionally, acknowledging the user in 360-degree experiences has been linked to a higher sense of presence and connection. However, less is known when acknowledging behaviors are carried out by attention-guiding mechanisms. To close these gaps, we conducted a within-subjects user study with five conditions of no guide and virtual arrows, birds, dogs, and dogs that acknowledge the user and the environment. Through our mixed-methods analysis, we found that the diegetic virtual animals resulted in a more positive user experience and were at least as effective as the non-diegetic arrow in guiding users towards target events. The acknowledging dog received the most positive responses from our participants in terms of preference and user experience and significantly improved their sense of presence compared to the non-diegetic arrow. Lastly, three themes emerged from a qualitative analysis of our participants’ feedback, indicating the importance of the guide’s blending in, its acknowledging behavior, and participants’ positive associations as the main factors behind their preferences.

Measuring the Perceived Three-Dimensional Location of Virtual Objects in Optical See-Through Augmented Reality

Farzana Alam Khan, Mississippi State University Veera Venkata Ram Murali Krishna Rao Muvva, University of Nebraska–Lincoln Dennis Wu, Mississippi State University Mohammed Safayet Arefin, Mississippi State University Nate Phillips, Mississippi State University J. Edward Swan II, Mississippi State University

For optical see-through augmented reality (AR), a new method for measuring the perceived three-dimensional location of virtual objects is presented, where participants verbally report a virtual object’s location relative to both a vertical and horizontal grid. The method is tested with a small (1.95 × 1.95 × 1.95 cm) virtual object at distances of 50 to 80 cm, viewed through a Microsoft HoloLens 1st generation AR display. Two experiments examine two different virtual object designs, whether turning in a circle between reported object locations disrupts HoloLens tracking, and whether accuracy errors, including a rightward bias and underestimated depth, might be due to systematic errors that are restricted to a particular display. Turning in a circle did not disrupt HoloLens tracking, and testing with a second display did not suggest systematic errors restricted to a particular display. Instead, the experiments are consistent with the hypothesis that, when looking downwards at a horizontal plane, HoloLens 1st generation displays exhibit a systematic rightward perceptual bias. Precision analysis suggests that the method could measure the perceived location of a virtual object within an accuracy of less than 1 mm.
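The reported accuracy (bias) and precision figures correspond to a standard decomposition of repeated judgments into a signed mean error and its spread. A minimal per-axis sketch (a hypothetical helper for illustration, not the authors' analysis code):

```python
import statistics

def bias_and_precision(reported, actual):
    """Summarize repeated location judgments along one axis.

    reported: positions the participant verbally reported (e.g., in cm).
    actual:   the true virtual-object positions for those trials.
    Returns (bias, precision): the signed mean error (a systematic
    offset such as a rightward bias) and the sample standard deviation
    of the errors (how repeatable the judgments are).
    """
    errors = [r - a for r, a in zip(reported, actual)]
    bias = statistics.mean(errors)
    precision = statistics.stdev(errors) if len(errors) > 1 else 0.0
    return bias, precision
```

Applied to the horizontal axis, a consistently positive bias with small spread would correspond to the systematic rightward perceptual offset the experiments describe.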

Mirror Mirror on My Phone: Investigating Dimensions of Self-Face Perception Induced by Augmented Reality Filters

Rebecca Fribourg, Trinity College Dublin Etienne Peillard, LabSTICC Rachel McDonnell, Trinity College Dublin

The main use of Augmented Reality (AR) today for the general public is in applications for smartphones. In particular, social network applications allow the use of many AR filters, modifying users’ environment but also their own image. These AR filters are increasingly and frequently being used and can distort users’ facial traits in many ways. Yet, as of today, we do not know clearly how users perceive their own faces as augmented by these filters. Face perception has been the focus of a substantial body of research, which has highlighted that specific facial traits can be interpreted from manipulations of different facial features (e.g., eye size was found to influence the perception of dominance and trustworthiness). However, while these studies provided valuable insights into the link between facial features and the perception of human faces, they only tackle the perception of other people’s faces. Up to this day, it remains unclear how one perceives appeal, personality traits, intelligence, and emotion in one’s own face depending on specific facial feature alterations. In this paper, we present a study that aims to evaluate the impact of different filters, modifying several features of the face such as the size or position of the eyes, the shape of the face, or the orientation of the eyebrows. These filters are evaluated via a self-evaluation questionnaire, asking the participants about the emotions and moral traits that their distorted face conveys. Our results show relative effects between the different filters in line with previous results regarding the perception of others. However, they also reveal specific effects on self-perception, showing, inter alia, that facial deformation decreases participants’ credence in their own image. The findings of this multi-factor study highlight not only the impact of facial deformation on users’ self-perception but also the specificities of this use in AR, paving the way for new work focusing on the psychological impact of such filters.

CrowdXR - Pitfalls and Potentials of Experiments with Remote Participants

Jiayan Zhao, The Pennsylvania State University Mark Simpson, The Pennsylvania State University Pejman Sajjadi, The Pennsylvania State University Jan Oliver Wallgrün, The Pennsylvania State University Ping Li, The Hong Kong Polytechnic University Mahda M. Bagher, The Pennsylvania State University Danielle Oprean, University of Missouri Lace Padilla, UC Merced Alexander Klippel, The Pennsylvania State University

Although the COVID-19 pandemic has made the need for remote data collection more apparent than ever, progress has been slow in the virtual reality (VR) research community, and little is known about the quality of the data acquired from crowdsourced participants who own a head-mounted display (HMD), which we call crowdXR. To investigate this problem, we report on a VR spatial cognition experiment that was conducted both in-lab and out-of-lab. The in-lab study was administered as a traditional experiment with undergraduate students and dedicated VR equipment. The out-of-lab study was carried out remotely by recruiting HMD owners from VR-related research mailing lists, VR subreddits in Reddit, and crowdsourcing platforms. Demographic comparisons show that our out-of-lab sample was older, included more males, and had a higher sense of direction than our in-lab sample. The results of the involved spatial memory tasks indicate that the reliability of the data from out-of-lab participants was as good as or better than their in-lab counterparts. Additionally, the data for testing our research hypotheses were comparable between in- and out-of-lab studies. We conclude that crowdsourcing is a feasible and effective alternative to the use of university participant pools for collecting survey and performance data for VR research, despite potential design issues that may affect the generalizability of study results. We discuss the implications and future directions of running VR studies outside the laboratory and provide a set of practical recommendations.

SceneAR: Scene-based Micro Narratives for Sharing and Remixing in Augmented Reality

Mengyu Chen, University of California Santa Barbara Andrés Monroy-Hernández, Snap Inc. Misha Sra, University of California Santa Barbara

Short-form digital storytelling has become a popular medium for millions of people to express themselves. Traditionally, this medium uses primarily 2D media such as text (e.g., memes), images (e.g., Instagram), gifs (e.g., Giphy), and videos (e.g., TikTok, Snapchat). To expand the modalities from 2D to 3D media, we present SceneAR, a smartphone application for creating sequential scene-based micro narratives in augmented reality (AR). What sets SceneAR apart from prior work is the ability to share the scene-based stories as AR content—no longer limited to sharing images or videos, these narratives can now be experienced in people’s own physical environments. Additionally, SceneAR affords users the ability to remix AR, empowering them to build-upon others’ creations collectively. We asked 18 people to use SceneAR in a 3-day study. Based on user interviews, analysis of screen recordings, and the stories they created, we extracted three themes. From those themes and the study overall, we derived six strategies for designers interested in supporting short-form AR narratives.

Paper Session 13: Rendering

Thursday, 7 October 9:30 CEST UTC+2 Track A

Session Chair: Itaru Kitahara

Reconstructing Reflection Maps using a Stacked-CNN for Mixed Reality Rendering

Andrew Chalmers, Victoria University of Wellington Junhong Zhao, Victoria University of Wellington Daniel Medeiros, Victoria University of Wellington Taehyun Rhee, Victoria University of Wellington

Invited TVCG Paper

Corresponding lighting and reflectance between real and virtual objects is important for spatial presence in augmented and mixed reality (AR and MR) applications. We present a method to reconstruct real-world environmental lighting, encoded as a reflection map (RM), from a conventional photograph. To achieve this, we propose a stacked convolutional neural network (SCNN) that predicts high dynamic range (HDR) 360° RMs with varying roughness from a limited field of view, low dynamic range photograph. The SCNN is progressively trained from high to low roughness to predict RMs at varying roughness levels, where each roughness level corresponds to a virtual object’s roughness (from diffuse to glossy) for rendering. The predicted RM provides high-fidelity rendering of virtual objects to match with the background photograph. We illustrate the use of our method with indoor and outdoor scenes trained on separate indoor/outdoor SCNNs showing plausible rendering and composition of virtual objects in AR/MR. We show that our method has improved quality over previous methods with a comparative user study and error metrics.

Adaptive Light Estimation using Dynamic Filtering for Diverse Lighting Conditions

Junhong Zhao, Victoria University of Wellington Andrew Chalmers, Victoria University of Wellington Taehyun Rhee, Victoria University of Wellington

High dynamic range (HDR) panoramic environment maps are widely used to illuminate virtual objects to blend with real-world scenes. However, in common applications for augmented and mixed-reality (AR/MR), capturing 360-degree surroundings to obtain an HDR environment map is often not possible using consumer-level devices. We present a novel light estimation method to predict 360-degree HDR environment maps from a single photograph with a limited field-of-view (FOV). We introduce the Dynamic Lighting network (DLNet), a convolutional neural network that dynamically generates the convolution filters based on the input photograph sample to adaptively learn the lighting cues within each photograph. We propose novel Spherical Multi-Scale Dynamic (SMD) convolutional modules to dynamically generate sample-specific kernels for decoding features in the spherical domain to predict 360-degree environment maps.

Using DLNet and data augmentations with respect to FOV, an exposure multiplier, and color temperature, our model shows the capability of estimating lighting under diverse input variations. Compared with prior work that fixes the network filters once trained, our method maintains lighting consistency across different exposure multipliers and color temperatures, and maintains robust light estimation accuracy as FOV increases. The surrounding lighting information estimated by our method ensures coherent illumination of 3D objects blended with the input photograph, enabling high-fidelity augmented and mixed reality supporting a wide range of environmental lighting conditions and device sensors.
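The core idea of sample-adaptive filtering can be illustrated in a few lines: a generator produces the convolution kernel from the input itself, so each photograph is decoded with its own filters. This is a minimal 1D, single-channel sketch, not DLNet's spherical SMD modules; `filter_gen` stands in for the kernel-generating subnetwork and is an assumption of this illustration.

```python
import numpy as np

def dynamic_conv1d(x, filter_gen):
    """Toy sample-adaptive convolution: the kernel is generated from the
    input sample itself, so every input is filtered with its own weights.
    1D and single-channel for brevity."""
    k = filter_gen(x)                      # kernel depends on this sample
    pad = len(k) // 2
    xp = np.pad(x, pad)                    # zero-pad so output matches input length
    return np.array([np.dot(xp[i:i + len(k)], k) for i in range(len(x))])
```

With a generator that always returns the identity kernel `[0, 1, 0]`, the output reproduces the input; a learned generator would instead emit kernels tuned to each photograph's lighting cues.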

Neural Cameras: Learning Camera Characteristics for Coherent Mixed Reality Rendering

David Mandl, Graz University of Technology Peter Mohr, VRVis Research Center Tobias Langlotz, University of Otago Christoph Ebner, Graz University of Technology Shohei Mori, Graz University of Technology Stefanie Zollmann, University of Otago Peter Roth, Technical University of Munich Denis Kalkofen, Graz University of Technology

Coherent rendering is important for generating plausible Mixed Reality presentations of virtual objects within a user’s real-world environment. Besides photo-realistic rendering and correct lighting, visual coherence requires simulating the imaging system that is used to capture the real environment. While existing approaches either focus on a specific camera or a specific component of the imaging system, we introduce Neural Cameras, the first approach that jointly simulates all major components of an arbitrary modern camera using neural networks. Our system allows for adding new cameras to the framework by learning the visual properties from a database of images captured using the physical camera. We present qualitative and quantitative results and discuss future directions for research that emerge from using Neural Cameras.

Selective Foveated Ray Tracing for Head-Mounted Displays

Youngwook Kim, Sogang University Yunmin Ko, Snow Corp. Insung Ihm, Sogang University

Although ray tracing produces significantly more realistic images than traditional rasterization techniques, it is still considered computationally burdensome when implemented on a head-mounted display (HMD) system that demands both wide field of view and high rendering rate. A further challenge is that to present high-quality images on an HMD screen, a sufficient number of ray samples should be taken per pixel for effective antialiasing to reduce visually annoying artifacts. In this paper, we present a novel foveated real-time rendering framework that realizes classic Whitted-style ray tracing on an HMD system. In particular, our method combines the selective supersampling technique of Jin et al. [8] with the foveated rendering scheme, resulting in perceptually highly efficient pixel sampling suitable for HMD ray tracing. We demonstrate that, further enhanced by foveated temporal antialiasing, our ray tracer renders nontrivial 3D scenes in real time on commodity GPUs at high sampling rates as effective as up to 36 samples per pixel (spp) in the foveal area, gradually reducing to at least 1 spp in the periphery.
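The fovea-to-periphery sampling schedule described above can be sketched as a simple eccentricity-to-budget mapping. The linear falloff and the 5°/50° radii below are illustrative assumptions, not the paper's perceptually derived schedule; only the 36-spp foveal and 1-spp peripheral endpoints come from the abstract.

```python
def samples_per_pixel(ecc_deg, foveal_radius=5.0, max_ecc=50.0,
                      spp_fovea=36, spp_min=1):
    """Map angular eccentricity (degrees from gaze center) to a ray-sample
    budget: full supersampling inside the fovea, falling off to 1 spp in
    the far periphery. The falloff shape and radii are hypothetical."""
    if ecc_deg <= foveal_radius:
        return spp_fovea
    if ecc_deg >= max_ecc:
        return spp_min
    t = (ecc_deg - foveal_radius) / (max_ecc - foveal_radius)
    return max(spp_min, round(spp_fovea * (1.0 - t)))
```

A renderer would query this per pixel (or per tile) using the eye tracker's current gaze point, spending the saved rays on foveal antialiasing.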

Foveated Photon Mapping

Xuehuai Shi, Beihang University Lili Wang, Beihang University Xiaoheng Wei, Beihang University Ling-Qi Yan, University of California, Santa Barbara

Virtual reality (VR) applications require high-performance rendering algorithms to efficiently render 3D scenes on the VR head-mounted display, to provide users with an immersive and interactive virtual environment. Foveated rendering improves the performance of rendering algorithms by allocating computing resources to different regions based on human visual acuity, rendering images of different qualities in different regions. Rasterization-based methods and ray tracing methods can be directly applied to foveated rendering, but rasterization-based methods have difficulty estimating global illumination (GI), and ray tracing methods are inefficient for rendering scenes that contain paths with low probability. Photon mapping is an efficient GI rendering method for scenes with different materials. However, since photon mapping cannot dynamically adjust the rendering quality of GI according to human visual acuity, it cannot be directly applied to foveated rendering. In this paper, we propose a foveated photon mapping method to render realistic GI effects in the foveal region. We use a foveated photon tracing method to generate photons with high density in the foveal region, and these photons are used to render high-quality images there. We further propose a temporal photon management scheme that selects and updates the valid foveated photons of the previous frame to improve our method’s performance. Our method can render diffuse, specular, glossy and transparent materials to achieve effects specifically related to GI, such as color bleeding, specular reflection, glossy reflection and caustics. Our method supports dynamic scenes and renders high-quality GI in the foveal region at interactive rates.
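As a minimal illustration of temporal photon management, the sketch below keeps only previous-frame photons that still fall inside a foveal cone around the new gaze direction and implicitly marks the rest for retracing. The 10° cone half-angle, the flat tuple layout, and a normalized `gaze_dir` are assumptions for illustration, not the paper's selection criterion.

```python
import math

def reuse_valid_photons(photons, eye, gaze_dir, fovea_deg=10.0):
    """Keep previous-frame photons whose direction from the eye still lies
    inside the foveal cone around the new gaze direction; discarded photons
    would be regenerated by foveated photon tracing."""
    cos_min = math.cos(math.radians(fovea_deg))
    gx, gy, gz = gaze_dir                       # assumed unit length
    valid = []
    for px, py, pz in photons:
        dx, dy, dz = px - eye[0], py - eye[1], pz - eye[2]
        norm = math.sqrt(dx * dx + dy * dy + dz * dz)
        # cosine of angle between eye->photon and gaze must exceed the cone limit
        if norm > 0 and (dx * gx + dy * gy + dz * gz) / norm >= cos_min:
            valid.append((px, py, pz))
    return valid
```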

Paper Session 14: Perception & Experiences

Thursday, 7 October 9:30 CEST UTC+2 Track B

Session Chair: Etienne Peillard

Investigation of Size Variations in Optical See-through Tangible Augmented Reality

Denise Kahl, German Research Center for Artificial Intelligence Marc Ruble, German Research Center for Artificial Intelligence Antonio Krüger, German Research Center for Artificial Intelligence

Optical see-through AR headsets are becoming increasingly attractive for many applications. Interaction with the virtual content is usually achieved via hand gestures or with controllers. A more seamless interaction between the real and virtual world can be achieved by using tangible objects to manipulate the virtual content. Instead of interacting with detailed physical replicas, working with abstractions allows a single physical object to represent a variety of virtual objects. These abstractions would differ from their virtual representations in shape, size, texture and material. This paper investigates, for the first time in optical see-through AR, whether size variations are possible without major losses in performance, usability and immersion. The conducted study shows that size can be varied within a limited range without significantly affecting task completion times as well as feelings of disturbance and presence. Stronger size deviations are possible for physical objects smaller than the virtual object than for larger physical objects.

Virtual extensions improve perception-based instrument alignment using optical see-through devices

Mohamed Benmahdjoub, Erasmus MC Wiro J. Niessen, Erasmus MC Eppo B. Wolvius, Erasmus MC Theo van Walsum, Erasmus MC

Instrument alignment is a common task in various surgical interventions using navigation. The goal of the task is to position and orient an instrument as it has been planned preoperatively. To this end, surgeons rely on patient-specific data visualized on screens alongside preplanned trajectories. The purpose of this manuscript is to investigate the effect of instrument visualization/non-visualization on alignment tasks, and to compare it with the virtual extensions approach, which augments the realistic representation of the instrument with simple 3D objects. 18 volunteers performed six alignment tasks under each of the following conditions: no visualization of the instrument; realistic visualization of the instrument; realistic visualization extended with virtual elements (virtual extensions). The first condition represents an egocentric-based alignment, while the two other conditions additionally make use of exocentric depth estimation to perform the alignment. The device used was a see-through device (Microsoft HoloLens 2). The positions of the head and the instrument were acquired during the experiment. Additionally, the users were asked to fill in NASA-TLX and SUS forms for each condition. The results show that instrument visualization is essential for a good alignment using see-through devices. Moreover, virtual extensions helped achieve the best performance compared to the other conditions, with median positional and angular errors of 2 mm and 2°, respectively. Furthermore, the virtual extensions decreased the average head velocity while similarly reducing frustration levels. Therefore, making use of virtual extensions could facilitate alignment tasks in augmented and virtual reality (AR/VR) environments, specifically in AR-navigated surgical procedures when using optical see-through devices.

Now I’m Not Afraid: Reducing Fear of Missing Out in 360° Videos on a Head-Mounted Display Using a Panoramic Thumbnail

Shoma Yamaguchi, The University of Tokyo Nami Ogawa, DMM.com Takuji Narumi, The University of Tokyo

Cinematic virtual reality, or 360° video, provides viewers with an immersive experience, allowing them to enjoy a video while moving their head to watch in any direction. However, there is an inevitable problem of feeling fear of missing out (FOMO) when viewing a 360° video, as only a part of the video is visible to the viewer at any given time. To solve this problem, we developed a technique to present a panoramic thumbnail of a full 360° video to users through a head-mounted display. With this technique, the user can grasp the overall view of the video as needed. We conducted an experiment to evaluate the FOMO, presence, and quality of viewing experience while using this technique compared to normal viewing without it. The results of the experiment show that the proposed technique relieved FOMO, the quality of viewing experience was improved, and there was no difference in presence. We also investigated how users interacted with this new interface based on eye tracking and head tracking data during viewing, which suggested that users used the panoramic thumbnail to actively explore outside their field of view.

Understanding the Two-Step Nonvisual Omnidirectional Guidance for Target Acquisition in 3D Spaces

Seung A Chung, Ewha Womans University Kyungyeon Lee, Ewha Womans University Uran Oh, Ewha Womans University

Providing directional guidance is important especially for exploring unfamiliar environments. However, most studies are limited to two-dimensional guidance when many interactions happen in 3D spaces. Moreover, visual feedback that is often used to communicate the 3D position of a particular object may not be available in situations when the target is occluded by other objects or located outside of one’s field of view, or due to visual overload or light conditions. Inspired by a prior finding that showed users’ tendency of scanning a 3D space in one direction at a time, we propose two-step nonvisual omnidirectional guidance feedback designs varying the searching order, where the guidance for the vertical location of the target (the altitude) is offered to the users first, followed by the horizontal direction of the target (the azimuth angle), and vice versa. To investigate its effect, we conducted a user study with 12 blindfolded sighted participants. Findings suggest that our proposed two-step guidance outperforms the default condition with no order in terms of task completion time and travel distance, particularly when the guidance in the horizontal direction is presented first. We plan to extend this work to assist with finding a target in 3D spaces in a real-world environment.
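The two cues behind the two-step guidance, the altitude (elevation) and the azimuth, can be computed from the user-to-target vector as follows. The x-right, y-up, z-forward coordinate convention and the function names are assumptions of this sketch, not the study's implementation.

```python
import math

def guidance_angles(user_pos, target_pos):
    """Decompose the direction to a 3D target into the two cues used in
    two-step guidance: elevation (vertical cue) and azimuth (horizontal
    cue), both in degrees. Assumes x-right, y-up, z-forward coordinates."""
    dx = target_pos[0] - user_pos[0]
    dy = target_pos[1] - user_pos[1]
    dz = target_pos[2] - user_pos[2]
    horiz = math.hypot(dx, dz)                      # distance in the ground plane
    elevation = math.degrees(math.atan2(dy, horiz)) # altitude cue, given first
    azimuth = math.degrees(math.atan2(dx, dz))      # horizontal cue, given second
    return elevation, azimuth
```

A guidance system would sonify or vibrate one angle at a time, switching to the second cue once the first is within tolerance.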

Investigating Textual Sound Effects in a Virtual Environment and their impacts on Object Perception and Sound Perception

Thibault Fabre, The University of Tokyo Adrien Alexandre Verhulst, Sony Computer Science Laboratories Alfonso Balandra, The University of Tokyo Maki Sugimoto, Keio University Masahiko Inami, The University of Tokyo

In comics, Textual Sound Effects (TE) can describe sounds, but also actions, events, etc. TE could be used in Virtual Environments to efficiently create an easily recognizable scene and add more information to objects at a relatively low design cost. We investigate the impact of TE in a Virtual Environment on objects’ material perception (on category and properties) and on sound perception (on volume [dB] and spatial position). Participants (N=13, repeated measures) categorized metallic and wooden spheres and significantly changed their reaction time depending on the TE congruence with the spheres’ material/sound. They then rated a sphere’s properties (i.e., wetness, warmness, softness, smoothness, and dullness) and significantly changed their rating depending on the TE. When comparing 2 sound volumes, they perceived a sound associated with a shrinking TE as less loud and a sound associated with a growing TE as louder. When locating an audio source, they placed it significantly closer to a TE.

The Impact of Focus and Context Visualization Techniques on Depth Perception in Optical See-Through Head-Mounted Displays

Alejandro Martin-Gomez, Technical University of Munich Jakob Weiss, Technical University of Munich Andreas Keller, Technical University of Munich Ulrich Eck, Technical University of Munich Daniel Roth, Technical University of Munich Nassir Navab, Technical University of Munich

Estimating the depth of virtual content has proven to be a challenging task in Augmented Reality (AR) applications. Existing studies have shown that the visual system uses multiple depth cues to infer the distance of objects, occlusion being one of the most important ones. Generating appropriate occlusions becomes particularly important for AR applications that require the visualization of augmented objects placed below a real surface. Examples of these applications are medical scenarios in which anatomical information needs to be observed within the patient’s body. In this regard, existing works have proposed several focus and context (F+C) approaches to aid users in visualizing this content using Video See-Through (VST) Head-Mounted Displays (HMDs). However, the implementation of these approaches in Optical See-Through (OST) HMDs remains an open question due to the additive characteristics of the display technology. In this paper, we, for the first time, design and conduct a user study that compares depth estimation between VST and OST HMDs using existing in-situ visualization methods. Our results show that these visualizations cannot be directly transferred to OST displays without increasing error in depth perception tasks. To tackle this gap, we perform a structured decomposition of the visual properties of AR F+C methods to find best-performing combinations. We propose the use of chromatic shadows and hatching approaches transferred from computer graphics. In a second study, we perform a factorized analysis of these combinations, showing that varying the shading type and using colored shadows can lead to better depth estimation when using OST HMDs.

Paper Session 15: Human Factors & Ethics

Thursday, 7 October 11:30 CEST UTC+2 Track A

Session Chair: Manuela Chessa

Safety, Power Imbalances, Ethics and Proxy Sex: Surveying In-The-Wild Interactions Between VR Users and Bystanders

Joseph O’Hagan, University of Glasgow Julie R. Williamson, University of Glasgow Mark McGill, University of Glasgow Mohamad Khamis, University of Glasgow

VR users and bystanders must sometimes interact, but our understanding of these interactions – their purpose, how they are accomplished, attitudes toward them, and where they break down – is limited. This current gap inhibits research into managing or supporting these interactions, and preventing unwanted or abusive activity. We present the results of the first survey (N=100) that investigates stories of actual emergent in-the-wild interactions between VR users and bystanders. Our analysis indicates VR user and bystander interactions can be categorised into one of three categories: coexisting, demoing, and interrupting. We highlight common interaction patterns and impediments encountered during these interactions. Bystanders play an important role in moderating the VR user’s experience, for example intervening to save the VR user from potential harm. However, our stories also suggest that the occlusive nature of VR introduces the potential for bystanders to exploit the vulnerable state of the VR user; and for the VR user to exploit the bystander for enhanced immersion, introducing significant ethical concerns.

Evaluating the User Experience of a Photorealistic Social VR Movie

Jie Li, Centrum Wiskunde & Informatica Shishir Subramanyam, Centrum Wiskunde & Informatica Jack Jansen, Centrum Wiskunde & Informatica Yanni Mei, Centrum Wiskunde & Informatica Ignacio Reimat, Centrum Wiskunde & Informatica Kinga Lawicka, Centrum Wiskunde & Informatica Pablo Cesar, Centrum Wiskunde & Informatica

We all enjoy watching movies together. However, this is not always possible if we live apart. While we can remotely share our screens, the experience differs from being together. We present a social Virtual Reality (VR) system that captures, reconstructs, and transmits multiple users’ volumetric representations into a commercially produced 3D virtual movie, so they have the feeling of “being there” together. We conducted a 48-user experiment where we invited users to experience the virtual movie either using a Head Mounted Display (HMD) or using a 2D screen with a game controller. In addition, we invited 14 VR experts to experience both the HMD and the screen version of the movie and discussed their experiences in two focus groups. Our results showed that both end-users and VR experts found that the way they navigated and interacted inside a 3D virtual movie was novel. They also found that the photorealistic volumetric representations enhanced feelings of co-presence. Our study lays the groundwork for future interactive and immersive VR movie co-watching experiences.

Directions for 3D User Interface Research from Consumer VR Games

Anthony Steed, University College London Tuukka M. Takala, Waseda University Dan Archer, University College London Wallace Lages, Virginia Tech Robert W. Lindeman, University of Canterbury

With the continuing development of affordable immersive virtual reality (VR) systems, there is now a growing market for consumer content. The current form of consumer systems is not dissimilar to the lab-based VR systems of the past 30 years: the primary input mechanism is a head-tracked display and one or two tracked hands with buttons and joysticks on hand-held controllers. Over those 30 years, a very diverse academic literature has emerged that covers design and ergonomics of 3D user interfaces (3DUIs). However, the growing consumer market has engaged a very broad range of creatives that have built a very diverse set of designs. Sometimes these designs adopt findings from the academic literature, but other times they experiment with completely novel or counter-intuitive mechanisms. In this paper and its online adjunct, we report on novel 3DUI design patterns that are interesting from both design and research perspectives: they are highly novel, potentially broadly re-usable and/or suggest interesting avenues for evaluation. The supplemental material, which is a living document, is a crowd-sourced repository of interesting patterns. This paper is a curated snapshot of those patterns that were considered to be the most fruitful for further elaboration.

Using Trajectory Compression Rate to Predict Changes in Cybersickness in Virtual Reality Games

Diego Vilela Monteiro, Xi’an Jiaotong-Liverpool University Hai-Ning Liang, Xi’an Jiaotong-Liverpool University Xiaohang Tang, Xi’an Jiaotong-Liverpool University Pourang Irani, University of Manitoba

Identifying cybersickness in virtual reality (VR) applications such as games in a fast, precise, non-intrusive, and non-disruptive way remains challenging. Several factors can cause cybersickness, and their identification will help find its origins and prevent or minimize it. One such factor is virtual movement. Movement, whether physical or virtual, can be represented in different forms. One way to represent and store it is with a temporally annotated point sequence. Because a sequence is memory-consuming, it is often preferable to save it in a compressed form. Compression allows redundant data to be eliminated while still preserving changes in speed and direction. Since changes in direction and velocity in VR can be associated with cybersickness, changes in compression rate can likely indicate changes in cybersickness levels. In this research, we explore whether quantifying changes in virtual movement can be used to estimate variation in cybersickness levels of VR users. We investigate the correlation between changes in the compression rate of movement data in two VR games with changes in players’ cybersickness levels captured during gameplay. Our results show (1) a clear correlation between changes in compression rate and cybersickness, and (2) that a machine learning approach can be used to identify these changes. Finally, results from a second experiment show that our approach is feasible for cybersickness inference in games and other VR applications that involve movement.
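The link between compressibility and direction changes can be sketched with a toy compressor that drops interior trajectory points whose turn angle is small and reports the fraction removed. The greedy single pass and the 5° threshold are illustrative assumptions, not the authors' compression scheme: straight, steady motion compresses well (high rate), while frequent turns keep most points (low rate).

```python
import math

def compression_rate(points, angle_thresh_deg=5.0):
    """Drop interior 2D points whose turn angle is below a threshold and
    return the fraction of points removed. Low rates indicate frequent
    direction changes in the virtual movement."""
    if len(points) < 3:
        return 0.0
    kept = [points[0]]
    for prev, cur, nxt in zip(points, points[1:], points[2:]):
        v1 = (cur[0] - prev[0], cur[1] - prev[1])
        v2 = (nxt[0] - cur[0], nxt[1] - cur[1])
        ang = abs(math.degrees(math.atan2(v2[1], v2[0]) -
                               math.atan2(v1[1], v1[0])))
        if min(ang, 360.0 - ang) >= angle_thresh_deg:
            kept.append(cur)           # a real turn: keep this point
    kept.append(points[-1])
    return 1.0 - len(kept) / len(points)
```

Tracking this rate over a sliding window of gameplay would yield the kind of time-varying signal the paper correlates with cybersickness changes.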

A Partially-Sorted Concentric Layout for Efficient Label Localization in Augmented Reality

Zijing Zhou, Beihang University Lili Wang, Beihang University Voicu Popescu, Purdue University

A common approach for Augmented Reality labeling is to display the label text on a flag planted into the real world element at a 3D anchor point. When there are more than just a few labels, the efficiency of the interface decreases as the user has to search for a given label sequentially. The search can be accelerated by sorting the labels alphabetically, but sorting all labels results in long and intersecting leader lines from the anchor points to the labels. This paper proposes a partially-sorted concentric label layout that leverages the search efficiency of sorting while avoiding the label display problems of long or intersecting leader lines. The labels are partitioned into a small number of sorted sequences displayed on circles of increasing radii. Since the labels on a circle are sorted, the user can quickly search each circle. A tight upper bound derived from circular permutation theory limits the number of circles and thereby the complexity of the label layout. For example, 12 labels require at most three circles. When the application allows it, the labels are presorted to further reduce the number of circles in the layout. The layout was tested in a user study where it significantly reduced the label searching time compared to a conventional single-circle layout.
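The partition idea can be sketched with a greedy pass that appends each label (taken in its on-screen anchor order) to the first ring whose last label still precedes it alphabetically, opening a new ring otherwise. This linear-run split only illustrates the partition into sorted sequences; the paper's bound from circular permutation theory exploits the circular reading order and is tighter.

```python
def partition_into_sorted_rings(labels):
    """Greedy split of an anchor-ordered label sequence into rings, each of
    which reads in non-decreasing alphabetical order. A new ring is opened
    only when no existing ring can accept the label."""
    rings = []
    for label in labels:
        for ring in rings:
            if ring[-1] <= label:      # order preserved: reuse this ring
                ring.append(label)
                break
        else:                          # order would break everywhere: new ring
            rings.append([label])
    return rings
```

A fully sorted input needs one ring, a reversed input needs one ring per label, and mixed inputs fall in between, which is the regime the concentric layout targets.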

Paper Session 16: 3D Manipulation

Thursday, 7 October 11:30 CEST UTC+2 Track B

Session Chair: Hartmut Seichter

Gaze Comes in Handy: Predicting and Preventing Erroneous Hand Actions in AR-Supported Manual Tasks

Julian Wolf, ETH Zürich Quentin Lohmeyer, ETH Zürich Christian Holz, ETH Zürich Mirko Meboldt, ETH Zürich

Emerging Augmented Reality headsets incorporate gaze and hand tracking and can, thus, observe the user’s behavior without interfering with ongoing activities. In this paper, we analyze hand-eye coordination in real-time to predict hand actions during target selection and warn users of potential errors before they occur. In our first user study, we recorded 10 participants playing a memory card game, which involves frequent hand-eye coordination with little task-relevant information. We found that participants’ gaze locked onto target cards 350 ms before the hands touched them in 73.3% of all cases, which coincided with the peak velocity of the hand moving to the target. Based on our findings, we then introduce a closed-loop support system that monitors the user’s fingertip position to detect the first card turn and analyzes gaze, hand velocity and trajectory to predict the second card before it is turned by the user. In a second study with 12 participants, our support system correctly displayed color-coded visual alerts in a timely manner with an accuracy of 85.9%. The results indicate the high value of eye and hand tracking features for behavior prediction and provide a first step towards predictive real-time user support.
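A hypothetical reader for the 350 ms gaze-lead finding: given a time-stamped gaze trace, the predicted target is simply whichever card the gaze was on 350 ms before the touch. The `(time_ms, card_id)` trace format and the fixed lead are illustrative; the actual support system also weighs hand velocity and trajectory.

```python
def predicted_card(gaze_trace, touch_time_ms, lead_ms=350):
    """Return the card the gaze was locked onto lead_ms before the touch.
    gaze_trace is a time-sorted list of (time_ms, card_id) samples."""
    target = None
    for t, card in gaze_trace:
        if t <= touch_time_ms - lead_ms:
            target = card              # latest sample before the cutoff
        else:
            break
    return target
```

Running this prediction before the hand lands is what allows a warning to be displayed ahead of an erroneous card turn.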

Evaluation of Drop Shadows for Virtual Object Grasping in Augmented Reality

Muadh Al-Kalbani, Birmingham City University Maite Frutos-Pascual, Birmingham City University Ian Williams, Birmingham City University

Invited CG&A Paper

This article presents the use of rendered visual cues as drop shadows and their impact on overall usability and accuracy of grasping interactions for monitor-based exocentric augmented reality (AR). We report on two conditions, grasping with drop shadows and without drop shadows, and analyze a total of 1620 grasps of two virtual object types (cubes and spheres). We report on the accuracy of one grasp type, the Medium Wrap grasp, against Grasp Aperture (GAp), Grasp Displacement (GDisp), completion time, and usability metrics from 30 participants. A comprehensive statistical analysis of the results is presented, giving comparisons of the inclusion of drop shadows in AR grasping. Findings showed that the use of drop shadows increases usability of AR grasping while significantly decreasing task completion times. Furthermore, drop shadows also significantly improve users’ depth estimation of AR object position. However, this study also shows that using drop shadows does not improve users’ object size estimation, which remains a problematic element in the AR grasping interaction literature.

Fine Virtual Manipulation with Hands of Different Sizes

Suzanne Sorli, Universidad Rey Juan Carlos Dan Casas, Universidad Rey Juan Carlos Mickeal Verschoor, Universidad Rey Juan Carlos Ana Tajadura-Jiménez, University College London Miguel Otaduy, Universidad Rey Juan Carlos

Natural interaction with virtual objects relies on two major technology components: hand tracking and hand-object physics simulation. There are functional solutions for these two components, but their hand representations may differ in size and skeletal morphology, hence making the connection non-trivial. In this paper, we introduce a pose retargeting strategy to connect the tracked and simulated hand representations, and we have formulated and solved this hand retargeting as an optimization problem. We have also carried out a user study that demonstrates the effectiveness of our approach to enable fine manipulations that are slow and awkward with naïve approaches.
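As a minimal, hypothetical instance of such a retargeting optimization: if the tracked and simulated hands differed only by a uniform scale s, minimizing the sum of squared distances between corresponding joint positions, Σ‖s·p − q‖², has the closed form s = Σ p·q / Σ p·p. The paper's actual objective also handles differing skeletal morphology; this sketch only shows the least-squares framing.

```python
def best_uniform_scale(tracked_pts, sim_pts):
    """Closed-form least-squares uniform scale s minimizing
    sum over joints of ||s*p - q||^2, where p are tracked-hand joint
    positions and q the corresponding simulated-hand positions."""
    num = sum(px * qx + py * qy + pz * qz
              for (px, py, pz), (qx, qy, qz) in zip(tracked_pts, sim_pts))
    den = sum(px * px + py * py + pz * pz for (px, py, pz) in tracked_pts)
    return num / den
```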

VR Collaborative Object Manipulation Based on View Quality

Lili Wang, Beihang University Xiaolong Liu, Beihang University Xiangyu Li, Beihang University

We introduce a collaborative manipulation method to improve the efficiency and accuracy of object manipulation in virtual reality applications with multiple users. When multiple users manipulate an object in collaboration, at any given moment some users have a better viewpoint than others: they can clearly observe the object being manipulated and the target position, and can therefore manipulate the object more efficiently and accurately. We construct a viewpoint quality function and evaluate the viewpoints of multiple users by computing its three components: the visibility of the object to be manipulated, the visibility of the target, and the combined depth and distance of the target. By comparing the viewpoint quality of multiple users, the user with the highest viewpoint quality is determined as the dominant manipulator, who manipulates the object at that moment. A temporal filter is proposed to filter the dominant-manipulator sequence generated by the previous frames and the current frame, which prevents the dominant role from jumping back and forth between users within a short time slice and makes the determination more stable. We designed a user study and tested our method with three multi-user collaborative manipulation tasks. Compared to two traditional dominant manipulator determination methods, first come first action and actively switch dominance, our method showed significant improvement in manipulation task completion time and rotation accuracy. Moreover, our method balances the participation time of users and reduces the task load significantly.
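The decision rule can be sketched as a weighted sum of the three stated components followed by an argmax over users. The weights and the [0,1] normalization of each component are assumptions of this sketch, not the paper's formulation.

```python
def viewpoint_quality(vis_object, vis_target, depth_term, w=(0.4, 0.4, 0.2)):
    """Weighted sum of the three viewpoint-quality components: visibility
    of the manipulated object, visibility of the target, and a combined
    depth/distance term for the target, each assumed normalized to [0,1]."""
    return w[0] * vis_object + w[1] * vis_target + w[2] * depth_term

def dominant_user(scores):
    """Index of the user with the highest viewpoint quality; a temporal
    filter over per-frame winners would stabilize this choice."""
    return max(range(len(scores)), key=scores.__getitem__)
```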

Separation, Composition, or Hybrid? – Comparing Collaborative 3D Object Manipulation Techniques for Handheld Augmented Reality

Jonathan Wieland, University of Konstanz Johannes Zagermann, University of Konstanz Jens Müller, University of Konstanz Harald Reiterer, University of Konstanz

Augmented Reality (AR) supported collaboration is a popular topic in HCI research. Previous work has shown the benefits of collaborative 3D object manipulation and identified two possibilities: Either separate or compose users’ inputs. However, their experimental comparison using handheld AR displays is still missing. We, therefore, conducted an experiment in which we tasked 24 dyads with collaboratively positioning virtual objects in handheld AR using three manipulation techniques: 1) Separation – performing only different manipulation tasks (i. e., translation or rotation) simultaneously, 2) Composition – performing only the same manipulation tasks simultaneously and combining individual inputs using a merge policy, and 3) Hybrid – performing any manipulation tasks simultaneously, enabling dynamic transitions between Separation and Composition. While all techniques were similarly effective, Composition was least efficient, with higher subjective workload and worse user experience. Preferences were polarized between clear work division (Separation) and freedom of action (Hybrid). Based on our findings, we offer research and design implications.

Exploring Head-based Mode-Switching in Virtual Reality

Rongkai Shi, Xi’an Jiaotong-Liverpool University Nan Zhu, Xi’an Jiaotong-Liverpool University Hai-Ning Liang, Xi’an Jiaotong-Liverpool University Shengdong Zhao, National University of Singapore

Mode-switching supports multilevel operations using a limited number of input methods. In Virtual Reality (VR) head-mounted displays (HMD), common approaches for mode-switching use buttons, controllers, and users’ hands. However, they are inefficient and challenging to do with tasks that require both hands (e.g., when users need to use two hands during drawing operations). Using head gestures for mode-switching can be an efficient and cost-effective way, allowing for a more continuous and smooth transition between modes. In this paper, we explore the use of head gestures for mode-switching especially in scenarios when both users’ hands are performing tasks. We present a first user study that evaluated eight head gestures that could be suitable for VR HMD with a dual-hand line-drawing task. Results show that move forward, move backward, roll left, and roll right led to better performance and are preferred by participants. A second study integrating these four gestures in Tilt Brush, an open-source painting VR application, is conducted to further explore the applicability of these gestures and derive insights. Results show that Tilt Brush with head gestures allowed users to change modes with ease and led to improved interaction and user experience. The paper ends with a discussion on some design recommendations for using head-based mode-switching in VR HMD.
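A minimal classifier for the four gestures the study favored, assuming per-frame head-pose deltas are available from the HMD. The 3 cm and 15° thresholds and the axis conventions are illustrative assumptions, not values from the paper.

```python
def classify_head_gesture(d_forward_cm, d_roll_deg,
                          move_thresh=3.0, roll_thresh=15.0):
    """Map head-pose deltas to the four favored mode-switching gestures:
    move forward / move backward (head translation along the view axis)
    and roll left / roll right (head roll). Returns None when no gesture
    crosses its threshold, so both hands stay free for the drawing task."""
    if d_forward_cm >= move_thresh:
        return "move forward"
    if d_forward_cm <= -move_thresh:
        return "move backward"
    if d_roll_deg >= roll_thresh:
        return "roll right"
    if d_roll_deg <= -roll_thresh:
        return "roll left"
    return None
```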

© 2021 by ISMAR

Sponsored by the IEEE Computer Society Visualization and Graphics Technical Committee and ACM SIGGRAPH

Latest Technical Paper Presentation Topics

  • by Ravi Bandakkanavar
  • April 14, 2024

This post contains a wide variety of technical papers chosen from various Engineering streams. The latest Technical Paper Presentation Topics include trending topics from emerging technologies like Artificial Intelligence, Machine Learning, 5G Technology, Cybersecurity, and Cloud Computing. It also includes topics from different Engineering streams like Computer Science and Engineering, Electronics and Communication Engineering, Electrical and Electronics Engineering, Mechanical Engineering, and Automobile Engineering.

  • Blockchain Technology
  • Chat GPT and its capabilities
  • How 5G Technology can Revolutionize the Industry?
  • 5G Wireless Technology
  • Impact of the Internet on Our Daily Life
  • The technology behind Face Unlocking in Smartphones
  • 3D Printing Technology
  • Anti-HIV using nanorobots
  • Humanoid Robots
  • Virtual Reality: working and examples
  • Metaverse and how Apps are developed in Metaverse
  • Smart Eye Technology
  • Augmented Reality
  • Automatic Video Surveillance Systems
  • Automatic number plate recognition
  • Cloud Computing vs. Distributed Computing
  • Importance of Cloud Computing to Solve Analytical Workloads
  • Attendance Monitoring Intelligent Classroom
  • Automatic Mobile Recharger Station
  • Automatic sound-based user grouping for real-time online forums
  • Bio-computers/Biomolecular Computers
  • What is Big Data?
  • Biomedical instrumentation and signal analysis

  • Artificial intelligence and the impact of AI on our lives
  • Is Artificial Intelligence a Threat or a Benefit?
  • Top 10 Ways Artificial Intelligence Future will Change the World
  • Artificial Intelligence: Technology that Hosts Race between Enterprises
  • The Role of Artificial Intelligence in the Healthcare Industry
  • How AI Technology Can Help You Optimize Your Marketing
  • Narrow AI vs General AI: Understanding The Key Differences
  • Future Of Industrial Robotics With AI
  • Causes of CyberCrime and Preventive Measures
  • What is Phishing? How to tackle Phishing Attacks?
  • What is the Dark Web? How to Protect yourself from the Dark Web?
  • Cyberbullying: The emerging crime of the 21st Century
  • Anatomy and working of search engines
  • Bionic Eye – a possible path toward the Artificial retina
  • Bluetooth-based Smart Sensor Networks
  • Broadband access via satellite
  • Brain-computer interface
  • Blue eyes technology
  • Brain-controlled car for the disabled using artificial intelligence
  • Brain Port device
  • Brain Finger Print Technology
  • BrainGate Technology
  • Digital jewelry
  • Development of an Intelligent Fire Sprinkler System
  • Capturing packets in secured networks
  • Digital Speech Effects Synthesizer
  • Aqua communication using a modem
  • Serverless Edge Computing
  • Intrusion detection system
  • How to prepare for a Ransomware attack?
  • What is the Dark Web? How to Protect Your Kids from the Dark Web?

Artificial Intelligence Topics for Presentation

  • Carbon nanotubes
  • Cloud computing
  • Mobile Ad hoc Networks (MANETs)
  • Narrow AI vs General AI
  • Security aspects in mobile ad hoc networks (MANETs)
  • Mobile Ad Hoc Network Routing Protocols and applications
  • Graphical Password Authentication
  • GSM-based Advanced Wireless Earthquake Alarm System for early warning
  • Computerized Paper Evaluation using Neural Network
  • Deploying a wireless sensor network on an active volcano
  • Data Mining and Predictive Analytics
  • Understanding Data Science and Data-Driven Businesses
  • Dynamic Car Parking Negotiation and Guidance Using an Agent-based platform
  • Real-Time Street Light Control Systems
  • Data Security in Local Networks using Distributed Firewalls
  • Design of a wireless sensor board for measuring air pollution
  • Design of diamond-based Photonics devices
  • Design of Low-Density Parity-Check Codes
  • What is LiDAR Technology?
  • Tizen Operating System – One OS for everything
  • Authentication using Biometric Technology
  • Speech Recognition
  • The working of Self-Driving Vehicles
  • Speech Processing
  • Digit recognition using a neural network
  • Digital Audio Effects Control by Accelerometry
  • Digital Camera Calibration and Inversion for Stereo iCinema
  • Dynamic resource allocation in Grid Computing
  • Dynamic Virtual Private Network
  • Earth Simulator – Fastest Supercomputer
  • Electromagnetic Applications for Mobile and Satellite Communications
  • Electronic nose & its application
  • Elliptical Curve Cryptography (ECC)
  • Cryptocurrency Wallet – is it the Future of Blockchain Technology
  • Reactive Power Consumption in Transmission Line
  • SPINS – Security Protocol For Sensor Network
  • Smart Bandage Technology
  • Embedded web server for remote access
  • Encrypted Text chat Using Bluetooth
  • Electronic toll collection
  • Electronic waste (e-waste)
  • Apache Hadoop Introduction
  • Embedded web server for industrial automation
  • Eyegaze system
  • Fuel saver system
  • Guarding distribution automation system against cyber attacks
  • Face detection technology
  • Falls detection using accelerometry and barometric pressure
  • Fast Convergence algorithms for Active Noise Controlling Vehicles
  • Fault-tolerant Routing in Mobile ad-hoc network
  • Ferroelectric RAM
  • Fingerprint recognition system by neural networks

Technical Paper Topics on CyberSecurity

  • Flexible CRT Displays
  • Fluorescent Multilayer Disc (FMD)
  • Fluorescent Multilayer Optical Data Storage
  • Forecasting Wind Power
  • Fractal image compression
  • Fractal robots
  • Geometric Invariants in Biological Molecules
  • Global positioning response system
  • Broadband over power line
  • Card-based security system
  • Face Recognition Technology
  • GSM Digital Security Systems for Printer
  • Groupware Technology
  • Indian Regional Navigation Satellite System
  • GSM Security And Encryption
  • Hardware implementation of background image modeling
  • HAVI: Home Audio Video Interoperability
  • Hawk Eye – A technology in sports
  • High Altitude Aeronautical Platforms
  • High-Performance Clusters
  • High-Performance DSP Architectures
  • High-speed circuits for optical interconnect
  • High-speed LANs or the Internet
  • Holographic Data Storage
  • Holographic Memory
  • Holographic Versatile Disc
  • Holt-Winters technique for Financial Forecasting
  • HomeRF and Bluetooth: A wireless data communications revolution
  • How does the Internet work?
  • Hyper Transport Technology
  • How does a search engine work?
  • How does the Google search engine work?
  • Human-computer interaction & its future
  • Design of a color Sensing System for Textile Industries
  • GSM-based Path Planning for Blind Persons Using Ultrasonic
  • Imbricate cryptography
  • Implementation of hamming code
  • Implementation of QUEUE
  • Image transmission over WiMAX Systems
  • Implantable on-chip Power Supplies
  • Integrating Wind Power into the Electricity grid
  • Integration of wind and solar energy in smart mini-grid
  • Intelligent navigation system
  • Intelligent Patient Monitoring System
  • Intelligent RAM: IRAM
  • Intelligent Software Agents
  • Interactive Voice Response System
  • Internet architecture and routing
  • Internet Protocol duplicate address detection and adaptation
  • Investigation of the real-time implementation of learning controllers
  • IP spoofing
  • IP redirector features
  • iSCSI: The future of Network Storage
  • ISO Loop magnetic couplers
  • Jamming and anti-Jamming Techniques
  • Light-emitting polymers
  • Load balancing and Fault-tolerant servers
  • Light Interception Image Analysis
  • Lightning Protection Using LFAM
  • Liquid Crystal on Silicon Display (LCOS)
  • Location estimation and trajectory prediction for PCS networks
  • Low-Power Microelectronics for Biomedical Implants
  • Low-Power Oscillator for Implants
  • Magnetic Random Access Memory
  • Managing Data In Multimedia Conferencing
  • Microchip production using extreme UV lithography
  • Modeling of wind turbine system for an Interior Permanent magnet generator
  • Moletronics – an invisible technology
  • Power generation through Thermoelectric generators
  • Multi-Protocol Label Switching
  • Multiuser Scheduling for MIMO broadcasting
  • Multisensor Fusion and Integration
  • Parasitic computing
  • Password paradigms
  • Polymer memory – a new way of using plastic as secondary storage
  • Programmable logic devices (PLD)
  • Non-Volatile Static RAM
  • Optical coherence tomography
  • Open source technology
  • Ovonic unified memory
  • Personal satellite assistant systems
  • PH control technique using fuzzy logic
  • Pluggable Authentication Modules (PAM)
  • Power Efficiency and Security in Smart Homes
  • Proactive Anomaly Detection
  • Prototype System Design for Telemedicine
  • QoS in Cellular Networks Based on MPT
  • Quad-Core Processors
  • Real-Time Operating Systems on Embedded ICs
  • Real-Time Speech Translation
  • Real-Time Systems with Linux/RTAI
  • Reliable and Fault-Tolerant Routing on Mobile Ad Hoc Network
  • Robotic Surgery
  • Vehicle monitoring and security system
  • Space-time adaptive processing
  • Radiofrequency identification (RFID) technology
  • Rapid prototyping

Paper Presentation Topics for Computer Science Engineering

  • Secured web portal for online shopping
  • Securing underwater wireless communication networks
  • Security analysis of the micropayment system
  • Security requirements in wireless sensor networks
  • Semantic web
  • Sensitive skin
  • Snake robot the future of agile motion
  • Software-Defined Radio (SDR)
  • Importance of Software-Defined Wide-Area Networks
  • SPWM (sinusoidal pulse width modulation) technique for multilevel inverters
  • Switchgrass
  • Solar Powered Speakers
  • Security on Wireless LAN
  • Adaptive cruise control
  • Session Initiation Protocol (SIP)
  • Shallow water Acoustic Networks
  • Significance of real-time transport Protocol in VOIP
  • Simulating Quantum Cryptography
  • Single photon emission computed tomography
  • Smart cameras for traffic surveillance
  • Smart Fabrics
  • Space Mouse
  • Space Robotics
  • Speech Enhancement for Cochlear Implants
  • Speed Detection of moving vehicles using speed cameras
  • Swarm intelligence & traffic safety
  • Synthetic Aperture Radar System
  • Systems Control for Tactical Missile Guidance
  • The Architecture of a Moletronics Computer
  • The Evolution of Digital Marketing
  • Thermal infrared imaging technology
  • Thought Translation Device (TTD)
  • Three-dimensional password for more secure authentication
  • Ultrasonic motor
  • Wearable biosensors
  • Traffic Light Control System
  • Wireless integrated network sensors
  • Ultrasonic detector for monitoring partial discharge
  • Ultra-Wideband Communication
  • What is IPaaS? Trending IPaaS Services Available In the Market
  • Wireless Computer Communications Using Sound Waves

213 thoughts on “Latest Technical Paper Presentation Topics”

Hello sir! Hope you are doing well. I have a technical paper presentation this semester, so I would like some suggestions in the domains of HCI, AI/ML, and Data Science. Thank you, sir.

Hello sir! Can you help me with what kinds of applications are most useful at present?

Are you looking for mobile applications or web applications? Automating the manual processes will add more value.

  • Work automation (delivery, operations, movement, robotics, AI/ML, etc.)
  • Traffic control systems
  • Communication/data transfer
  • VR/AR

Hi sir! Can you help me with what features I can add to a login system for Covid-19? Thank you so much, sir ❤️

If you are looking for a Covid application for information purposes, it may include the following:
  1. Covid statistics (country/state/city-wise, daily/weekly/monthly)
  2. Individual health history
  3. Vaccination status
  4. Hospital and health center information

You can add many more things, like health and hygiene shops, tourism, etc.

I need some technical topics related to ECE.


85+ Best Free Presentation Templates to Edit & Download

Written by: Mahnoor Sheikh

Looking for the best presentation templates to use for your next pitch deck, company meeting or training session? You’re in the right place.

Creating a good presentation from scratch can be frustrating. Especially if you want to stand out and look professional, but don’t have a lot of time on your hands.

Thankfully, this is why top online presentation templates and slide themes outside of PowerPoint and Google Slides exist.

Scroll down for some of the best presentation templates in Visme across various categories. When you find one you like, click on the button below it to start editing it using the presentation software.

Visme's presentation software has 400+ pre-made presentation templates and 1,500+ slide templates created by professional designers. All of our slideshows are fully customizable, so you can fit them to your brand easily using our intuitive Brand Wizard.

Whether you’re looking for a business presentation template, a nonprofit slideshow or an educational presentation for school, you’ll find exactly what you need.

Watch this video to see how easy it is to create a presentation with Visme.

Here's a short selection of 8 easy-to-edit presentation templates you can edit, share and download with Visme. View 72 more templates below:

Best Presentation Templates for Non-Designers

  • Category #1: Best Presentation Templates for Business
  • Category #2: Best Presentation Templates for Training & Education
  • Category #3: Best Presentation Templates for Nonprofit

Best Presentation Templates for Business

In this section, we have compiled a list of the best presentation templates for all kinds of business purposes, such as annual reports, research, investor pitches and even brand guidelines.

Scroll down to view our top picks for powerful business presentation templates or click through this navigable menu. You’ll discover plenty of creative PowerPoint templates, free downloads and designs.

  • Marketing Report Presentation
  • Project Status Report Presentation
  • Customer Service Presentation
  • Hiring Trends in the Fintech Sector Presentation
  • Employee Onboarding Presentation
  • Meeting Agenda Presentation
  • Sales Report Presentation
  • Press Release Presentation
  • Remote Team Working Agreement Presentation Template
  • Product Presentation
  • Market Analysis Presentation
  • Business Annual Report Presentation
  • Creative Product Presentation
  • Minimalist Fashion Design Presentation
  • Business Plan Presentation
  • Marketing Plan Presentation
  • SWOT Analysis Presentation
  • Best Workout Apps Presentation
  • Architecture Studio Presentation
  • Financial Report Presentation
  • Digital Marketing KPIs Presentation
  • Technology Research Presentation
  • Nature Background Presentation
  • Travel Presentation
  • Consulting Presentation
  • Business Case Study Presentation
  • Wedding Photography Presentation
  • Investor Pitch Deck
  • Mobile App Pitch Deck
  • CRM Go-To-Market Strategy Presentation
  • Online Marketing Webinar Presentation
  • Cab Service Pitch Deck
  • SaaS Pitch Deck
  • Social Media Pitch Deck
  • Influencer Marketing Pitch Deck
  • Visual Brand Identity Presentation
  • Professional Soccer Team Sponsorship Presentation
  • Corporate Sales Operational Report Presentation
  • Ecommerce Business Model Presentation
  • Company Win-Loss Analysis Report Presentation
  • LittleBlue Brand Guidelines Presentation
  • PixelGo Brand Guidelines Presentation
  • Talkie Brand Guidelines Presentation
  • HanaEatery Brand Guidelines Presentation
  • Atmoluxe Brand Guidelines Presentation
  • Creative Brief Presentation
  • Project Management Presentation
  • UX Strategy Presentation
  • Web Development Proposal Presentation
  • Human Resources Presentation
  • Team Project Update Presentation

1. Marketing Report Presentation

This monthly marketing report presentation template is a great way to present the results of your marketing efforts, such as your social media strategy. It features interactive slides, a clean design with icons and section dividers, modern fonts and a bold color scheme that you can replace with your own brand colors.

2. Project Status Report Presentation

If you’re looking for a project presentation template to update your boss, colleagues or top management, this is a great one to start with. It features a classy color scheme with plenty of charts, graphs and data widgets to help explain your project visually.

3. Customer Service Presentation

This presentation template is ideal for those involved in customer service. You can present all kinds of statistics and figures using this bold and edgy presentation template. It features nice, clean slides with large fonts, creative data widgets to visualize statistics and even a bar graph you can customize.

4. Hiring Trends in the Fintech Sector Presentation

This striking presentation template is sure to grab your audience’s attention. It features a futuristic design with modern fonts, popping colors against a dark background, social media icons and a clean layout with numbers to fit any type of industry or purpose.

If you're struggling to find the right words or you're short on time to add text to your presentation slides, try Visme's AI text generator. With a simple prompt, you'll be provided with copy for drafts, ideas, structures, outlines and overviews. You can also proofread and edit existing text. It's quick and easy to use.

5. Employee Onboarding Presentation

This onboarding presentation template is a great pick for HR teams who want to educate new employees about the company. With over 15 ready-to-use slides, this template uses a creative slide design: a black-and-white color scheme with a splash of bold color. Use it as is, or customize the colors to fit your company's brand identity.

If you're running out of time or creative fuel, use Visme’s AI Presentation Maker. Generate ready-to-use presentations with a single prompt in a matter of minutes. Click here to try Visme’s free AI presentation maker today.

6. Meeting Agenda Presentation

This robust company meeting presentation template consists of 15 well-designed slides. It has everything you need to present your meeting agenda, from Gantt charts and checklists to an appealing project timeline. Mix and match to communicate every single detail with ease.

7. Sales Report Presentation

The perfect sales report does exist! This sales presentation template is colorful, upbeat and just right for showing off those strong numbers to your boss or management. It consists of 9 professional slides with data visualizations, bold fonts and a corporate look and feel.

You can supercharge your presentation by tapping into Visme’s integration with your favorite data-driven apps like Tableau, Google Sheets, HubSpot, Salesforce, and more. Import data directly into your charts and graphs to easily keep your presentation charts updated as your sales data changes or grows.

8. Press Release Presentation

This press release presentation template is sleek and polished. It's just what you need to present company news and information to the media, potential investors, customers or the general public while maintaining your reputation. You can customize all nine slides with your own branding and content.

9. Remote Team Working Agreement Presentation

This remote team working agreement template allows you to document your working agreement in a professional presentation design. It features 18 slides to help you cover key aspects of your working agreement, such as communication and collaboration, working environment and more. Easily customize this template or keep the design as is.

10. Product Presentation

Presenting a new product or idea is a big deal. This product presentation template utilizes the power of storytelling so you can eloquently highlight the benefits and value proposition of your product. It comes with vibrant, classy colors and uses whitespace to guide the reader's eyes and keep them engaged. This presentation template is just one example of the many product presentation templates Visme has to offer.

11. Market Analysis Presentation

Looking to present market trends to your boss or colleagues? This business presentation template has all the graphs and charts that you need to instantly breathe life into your data and engage your audience. It even comes with a map and icons that you can make interactive.

12. Business Annual Report Presentation

This presentation template has a clean, corporate design and is great for presenting company information and financial numbers to your management or colleagues. Swap the images with your own and customize all elements with Visme’s drag-and-drop editor.

13. Creative Product Presentation

Looking for a creative presentation template for your SaaS or technology product? This template might be exactly what you’re looking for. It has 10 slides with icons, graphs and even a nice thank you page. Customize it to fit your brand and gear up to impress your audience. You can also take a look at the other templates listed below for more creative presentation designs.

14. Minimalist Fashion Design Presentation

This minimalistic presentation template will work well with all kinds of industries and purposes, especially fashion design. It has an elegant yet artistic design with images that you can swap for your own. Present your company in an attractive way and get potential investors interested.

If you don’t have images on hand, you can choose from a wide range of royalty-free images from Visme’s asset library, or let AI help you create your own.

Visme’s AI Image generator can help to provide a wide range of personalized images you can use in your presentation. Enter a prompt and choose from a range of output styles like photos, paintings, 3D graphics, icons, abstract art, and so much more.

15. Business Plan Presentation

Catch the eye of potential investors and score funding with this beautiful and polished business plan presentation template. It features 16 well-designed slides with graphs, icons, lists and other visual elements to help you organize and present your idea in a compelling way.

In addition to creating a stunning presentation, Visme can also help to give you and your team a competitive edge. Use Visme analytics to make data-driven decisions.

That’s one of the ways Matt Swiren, Manager of Partnership Marketing for the Broncos, and his team use Visme to execute strategies and wow partners.

Matt uses the analytics provided by Visme to better understand how their presentations are viewed and understand the segments partners value the most. This empowers him to be more thoughtful with their future presentation flow, designs, layout and content, which in turn gives the team the power to construct better conversations and relationships.

16. Marketing Plan Presentation

This marketing plan presentation template is bright, upbeat and professional. If you’re tired of the boring PowerPoint presentations with plain bullets, this template is perfect for you. It comes with lots of icons, bold fonts and data widgets that help keep your audience engaged.

17. SWOT Analysis Presentation

This professional SWOT analysis presentation template is ideal for presenting your company's strengths, weaknesses, opportunities and threats. The theme is designed specifically for retail and eCommerce stores, but you can adapt it for any other business.

Visme also offers a range of intuitive collaborative features, allowing your team to work on SWOT analyses and other projects together. This helps eliminate silo mentalities and provides a more collaborative space.

With features like Workflows, where you can assign tasks, projects and sections to team members, leave comments, manage user and privacy permissions, and work simultaneously on projects, you can achieve so much more.

18. Best Workout Apps Presentation

This fitness presentation template is energetic and features plenty of images that you can easily swap for your own. You can customize the colors, switch up the fonts or play around with all the free vector icons and graphics in Visme’s library.

19. Architecture Studio Presentation

This elegant architecture presentation design template has a minimalistic look and feel with a sleek and classy layout, icons and thin, sans serif fonts. You can use this presentation template to showcase your company, team and services in a memorable way.

20. Financial Report Presentation

If you’re on the hunt for a clean, professional-looking presentation template to present your company’s financials, this might just be it. This finance slideshow has an eye-catching color scheme, and features multiple graphs and charts to bring your data to life.

21. Digital Marketing KPIs Presentation

This is the best presentation template for showing off your social media engagement, traffic and other metrics to your boss or colleagues. It has a professional color scheme that you can customize to fit your brand, statistics slides for displaying various KPIs and icons representing different social platforms.

Are you wondering where to get more free PowerPoint templates for your digital marketing presentation? There are hundreds of available templates in Visme that you can export to PowerPoint with one click.

22. Technology Research Presentation

Present your research findings in an engaging way with this technology presentation template. With 4 beautiful slides designed by professionals, including one with a pie chart, this presentation template offers plenty of customization options and flexibility to fit your brand.

23. Nature Background Presentation

This is the best presentation template for eco-friendly businesses or companies working in botanical and/or organic industries. This nature-themed slideshow features 4 beautiful slides with elegant fonts, a creative layout and even a contact page at the end with social icons.

24. Travel Presentation

This presentation template is ideal for businesses in the travel industry, such as tour organizers. It features a beautiful landscape background in all 4 slides, along with relevant travel photos that you can easily swap for your own. It even has a slide for your different plans or packages to help you communicate your services better to potential customers and clients.

25. Consulting Presentation

This upbeat, colorful sales pitch presentation template has 15+ slides that help you create a modern and impactful slideshow for your consultancy or any other business. You can customize this presentation template in Visme and swap the content for your own. Add free vector icons, images, data visualizations and more.

26. Business Case Study Presentation

This colorful case study template is a must-have asset for teams who want to build trust with clients and integrate social proof into their marketing strategy. Customize the colors to fit your brand, easily replace the content, add more visuals and move around the slides to fit your company's unique needs.

27. Wedding Photography Presentation

This elegant wedding photography presentation template is designed to help you showcase your best photographs with full-sized, prominent backgrounds. You can swap the images and text for your own content and present your business in an impressive way.

If you’re short on time to edit your own images before adding them to this presentation, use Visme’s AI TouchUp Tools for a quick and stylish edit. Remove backgrounds, erase and replace objects, and unblur or sharpen images, all inside your Visme editor.

28. Investor Pitch Deck

This investor pitch deck template will accelerate your efforts to get funding and grab interest. It features a set of well-designed, polished slides with data visualizations, a pricing table and images that you can easily replace with your own in Visme's drag-and-drop editor.

Keep your pitch deck and presentation up to date with dynamic fields. Use them to instantly update company information and data across multiple projects, all with the click of a button, without having to manually type in the information and details.

29. Mobile App Pitch Deck

This beautiful pitch deck template with 17 fully customizable slides was inspired by Airbnb and is perfect for presenting to potential investors in an impressive way. This hospitality presentation template has a modern design with a focus on apps, important numbers and overall strategy.

30. CRM Go-To-Market Strategy Presentation

This go-to-market strategy presentation is suited to any product manager or marketer who needs to effectively lay out their plans to bring a product to market. This template comes equipped with slides for market research, competition overview, product features and other crucial elements to complete your GTM strategy.

31. Online Marketing Webinar Presentation

This webinar presentation template is designed to ensure a seamless presentation session. With its cool blue tones and effective use of white space, it allows you to professionally structure your content.

This template includes not only a well-organized layout but also timestamps to help you and your audience stay engaged and manage your time effectively. Each slide features a minimal design, providing ample space to showcase your knowledge without overwhelming the viewer.

32. Cab Service Pitch Deck

This cab service pitch deck was inspired by Uber, and is just right for presenting a new app or service designed to help potential customers improve their lifestyle. This service presentation template highlights key features and stand-out differences up front, which increases your chances of scoring solid investment.

33. SaaS Pitch Deck

This SaaS pitch deck template is inspired by Front and comes with 18 professionally designed slides that have all the visual and text elements you need for a compelling business pitch. Customize the colors, icons and other elements to fit this presentation template to your brand.

34. Social Media Pitch Deck

If you’re looking for a pitch deck template that’s irresistible to potential investors, this is it. After all, it worked for Buffer! This Buffer-inspired presentation template is ideal for any marketing or SaaS product. It has 18 beautiful slides with data visualizations, timelines, headshots, icons and tons of other visual elements that you can customize with a few clicks.

35. Influencer Marketing Pitch Deck

This powerful pitch deck template is inspired by Launchrock, and is designed with the purpose of helping your brand stand out from the competition. It has 16 professional and customizable slides with complete information that you can easily swap for your own content.

36. Visual Brand Identity Presentation

Showcase your brand elements in style with this beautiful visual identity presentation template. Make sure your colleagues stay on the same page by communicating logo, font, imagery and other visual standards that help you stay consistent and strengthen your brand.

37. Professional Soccer Team Sponsorship Presentation

For marketing and sales teams that focus on sports, you can utilize this professional soccer team sponsorship presentation to reach out to potential partners in exchange for resources or financial support. This template includes a brief overview of the benefits you'll provide to sponsors in exchange for their financial support of a sporting event, team, athlete, or league.

Feel free to customize it by adding additional pages to showcase your activation ideas, past campaigns, and sponsors. You can modify all elements, including logos, fonts, colors, and images, to match your team’s colors and branding.

38. Corporate Sales Operational Report Presentation

Present your company's sales performance, strategies, and activities using this corporate sales operational report. This template includes key metrics, revenue figures, and key performance indicators met.

The template is designed to help you showcase major insights on the data collected and recommendations to optimize sales operations for decision-making and performance evaluation.

39. Ecommerce Business Model Presentation

This business model presentation aims to help you showcase your company's core strategy and approach to generating sustainable revenue, serving both internal use and potential investors.

The presentation boasts a playful design, featuring a muted background with bright green highlights and occasional dark background slides to break the monotony as readers navigate through the content.

With 21 slides encompassing your company mission, product category, value proposition, revenue model, target audience profiles, competitor analysis, strategies, and financial projections, this template offers comprehensive coverage.

Moreover, this template allows you to integrate video content directly from platforms like YouTube, Vimeo or Loom, or upload videos directly to Visme.

40. Company Win-Loss Analysis Report Presentation

Ditch the boring Excel sheets and opt for this stunning win-loss analysis presentation to showcase your company's findings in a concise and highly memorable manner. It features a bold yet minimalistic design, blending dark and bright blue and purple tones throughout.

Each slide is thoughtfully designed to highlight critical aspects of your win-loss analysis, covering key performance indicators, strengths, recommendations, competitive landscape, and market trends aimed at enhancing your company's performance.

41. LittleBlue Brand Guidelines Presentation

This attractive food-themed brand guidelines presentation is fully customizable. You can change all the elements, such as logos, fonts, colors and images, and use this presentation template to communicate your own brand elements. Its modern and visually appealing design will make your brand elements look even better.

42. PixelGo Brand Guidelines Presentation

This modern brand guidelines presentation template will help you communicate your brand standards to your team or employees. It has a versatile design that works for all types of businesses and includes all the slides you need, such as for your logos, typography and color palette.

43. Talkie Brand Guidelines Presentation

This creative presentation template is great for showcasing your brand elements and standards in a memorable way. You can customize the color scheme, add your own typography and logos, and plug in your own content easily using Visme’s drag-and-drop editor.

44. HanaEatery Brand Guidelines Presentation

If you own a shop, or better yet, an eatery, this is the best presentation template for you. It features 10 professionally designed slides to help you showcase your brand elements in style. Customize the images, colors, logos, typography and more with just a few clicks in the Visme editor.

45. Atmoluxe Brand Guidelines Presentation

This creative brand guidelines presentation template has a futuristic design and can fit any type of business with just some quick customization. Swap the existing logos, icons, text and colors for your own content and create a powerful presentation to showcase your brand elements.

46. Creative Brief Presentation

This creative brief presentation template can help you communicate your brand style and design requirements to video editors, graphic designers, creative agencies and freelancers. Swap the existing images, icons, text and colors for your own content and create a branded creative brief.

47. Project Management Presentation

If you're looking to impress your audience without breaking the bank, look no further! Our collection of the best PowerPoint templates, available for free download, will elevate your project management presentations to new heights.

This project management presentation template has a professional design and is perfect for all kinds of businesses. This project presentation design comes with a stylish timeline slide, a client overview slide, a budget slide and more to help you create the ultimate project management plan.

48. UX Strategy Presentation

This modern UX strategy presentation is ideal for web developers and UX designers who want to present the progress of their UX projects or create a sales pitch for clients. This user experience presentation comes with 15+ slides, including a Gantt chart roadmap slide, and you can customize it to fit your business and design needs.

49. Web Development Proposal Presentation

Pitch your ideas to clients and show them how you can help them achieve their website goals with this proposal presentation template. This presentation is crafted especially for web development companies, but any business can use it by simply replacing the text, colors and images inside.

50. Human Resources Presentation

This HR report presentation template is ideal for corporate human resources teams, but any department or business can use it by customizing the content and design in Visme's presentation editor. The clean and sophisticated design of this template reflects your company's professionalism. Add your logo and visual elements to align this presentation template with your brand identity.

51. Team Project Update Presentation

This project status update presentation template is designed with teams in mind, and helps project teams of all kinds and sizes report their progress in a visual and engaging way. Use this template for your own needs, and change the colors, fonts, text, visuals, icons and more in Visme's drag-and-drop editor.

Best Presentation Templates for Training & Education

Tired of dull and uninspiring training presentations? Spice up your slides with our selection of creative PowerPoint templates, all available for free download. Whether you're writing a book report or preparing a lesson, these innovative designs will add flair and impact to your message, leaving a lasting impression on your students.

In this section, we have put together a list of the best presentation templates for business training, webinars, courses, schools and educational institutes.

Scroll down to find your pick or click through the menu below.

  • Business Studies Presentation
  • General Culture Presentation
  • Literature Presentation
  • Current Events Presentation
  • Entrepreneurship Presentation
  • History Presentation
  • Science Presentation
  • Health Presentation
  • Media Presentation
  • Worldschooling Presentation
  • Life Skills Presentation
  • Book Report Presentation
  • Training Plan Presentation
  • Science Trivia Presentation
  • Lesson Plan Presentation
  • Group Project Presentation
  • Graphic Design Course Presentation
  • Technology Webinar Presentation
  • Entrepreneurship Course Presentation
  • Public Speaking Workshop Presentation
  • Digital Marketing Webinar Presentation
  • Remote Team Training Presentation
  • Sales Training Presentation
  • Organizational Culture Presentation

52. Business Studies Presentation

This simple digital marketing presentation template is great for presenting in class, whether you're a student or a teacher. It has a useful “what is” layout that helps with explaining definitions and how something works. It's perfect for educational purposes, and you can customize it however you want.

53. General Culture Presentation

This creative presentation template is based on the topic of art and graffiti, but you can customize it for any other subject or topic. It features 5 beautifully designed slides with ample visual elements, including a pros and cons comparison table , to make any kind of information look instantly engaging.

54. Literature Presentation

Educate your class on the life of a famous author, poet or personality like William Shakespeare with this creative presentation template. It features 4 well-designed slides, including one with a detailed timeline perfect for highlighting important events or details of someone’s life.

55. Current Events Presentation

Want to present a global, national or social issue in class? This current events presentation template for students and teachers is the perfect fit. It has 5 complete slides with a pros and cons table and also a quote that you can swap for your own with just a few clicks.

56. Entrepreneurship Presentation

This is the best presentation template to introduce a concept or idea, especially if you’re presenting to students in an entrepreneurship or business class. It has a visually appealing design with background images , graphic elements and a bright color scheme that you can edit.

57. History Presentation

This dinosaur timeline presentation template is great for use in history class or even biology class. It features 4 creatively designed slides, including one with a colorful timeline, which you can customize with your own images, fonts, colors and content in the Visme editor.

58. Science Presentation

Present science topics in class with this engaging presentation template that focuses on a space exploration theme. This is one of the many stylish interactive presentation templates Visme provides. You can customize this presentation template with your own colors, icons and text. Add animations and interactive links, duplicate slides and do more with Visme.

59. Health Presentation

Customize this how-to presentation template for your next project in health class. This is the best presentation template to create awareness around an important health issue or even for educating the general public on first-aid or other health-related knowledge.

60. Media Presentation

Need a fancy timeline? This media presentation template has got you covered. Show how an idea, concept, product or any other object has evolved over time with this creative timeline presentation. Customize the colors, add your own images, change the font and much more.

61. Worldschooling Presentation

This worldschooling presentation template is perfect for education-related topics. It features 4 well-designed slides with maps, images, fun fonts and other visual elements that make it a great pick for topics that are to be presented in class by students or teachers.

62. Life Skills Presentation

This visually appealing presentation template is ideal for illustrating tips, tricks, how-to tutorials and other purposes that require several sections. You can easily customize and duplicate each slide, add or remove elements and swap the content for your own in Visme’s editor.

63. Book Report Presentation

This stunning book report presentation template has all the slides you need to dive deep into themes, storyline and other elements. The nine slides feature a mix of text-based content and graphics, such as a visual timeline and mini infographics. Customize it with ease in Visme.

64. Training Plan Presentation

This is the best presentation template for training plans and courses. It has a set of 13 slides that help you organize the training, break it up into different sections, and communicate course objectives and training content in a visually engaging, effective way.

After customizing your training presentation, you can share it as a live webpage or PowerPoint file, or upload it to a learning management system (LMS) of your choice. Visme allows you to effortlessly download your presentation as an xAPI or SCORM file that is compatible with top LMS platforms.

65. Science Trivia Presentation

Whether you want to present some fun facts in the class or quiz your students, this science trivia presentation template is a great fit. You can customize the color scheme, change the fonts, plug in your own content and you’re good to go! Make use of data widgets and icons for more impact.

66. Lesson Plan Presentation

Creating a lesson plan from scratch can be frustrating. Use this pre-designed presentation template with 8 handy slides to help you communicate lesson objectives, methods, assignments and more. You can easily customize the colors, fonts, icons and more with just a few clicks.

67. Group Project Presentation

This group project presentation template is great for students working and presenting together. It has several slides that are all fully customizable, including one for team members. The data visualizations help you communicate stats and figures in an easy-to-understand and engaging way.

68. Graphic Design Course Presentation

This colorful graphic design course presentation is ideal for webinars , online courses, training sessions and even the classroom. It's visually engaging with intuitive use of icons, lots of white space and an upbeat, lively design. Use it as it is or customize it to fit your unique design and content needs.

69. Technology Webinar Presentation

Looking for a creative technology presentation? Look no further than this technology webinar presentation template. Put together an informative and visually engaging presentation with professionally designed slides, lots of technology images and a geometric, futuristic design.

70. Entrepreneurship Course Presentation

Educate your students and attendees on entrepreneurship with this informative presentation template. It can be used in classrooms or for business training sessions, webinars and online courses. It's chock-full of data widgets, icons, charts and other visual elements, and also comes with tailor-made, original content to help guide your own.

71. Public Speaking Workshop Presentation

Public speaking can be tough, which is why a presentation like this one can help you train the attendees effectively with its engaging design, data visualizations and bold images that instill confidence. Use this workshop presentation template as is, or customize it for any other topic.

72. Digital Marketing Webinar Presentation

Break down the concept of digital marketing, ads, social media marketing and other concepts using this educational presentation template. This template can be used in schools and universities or in business training and webinars. It can easily be edited to fit your topic, content and design needs.

73. Remote Team Training Presentation

This remote team training presentation template is incredibly useful for businesses that are transitioning to a partially or fully remote work environment. Your team needs to learn how to effectively manage a remote team , and this presentation can help you do just that. Use it as is, or tweak the content and design inside easily.

74. Sales Training Presentation

Educate sales teams on how to improve their sales processes, polish their skills and bring in more revenue for the company with this sales training presentation template. This template is designed with a modern corporate look-and-feel with bold colors, lots of visuals and a sleek, sophisticated design.

75. Organizational Culture Presentation

Nothing is more boring than a dry, plain-looking PowerPoint presentation. So, why not take things up a notch and create a bright, colorful presentation to keep your audience engaged till the very end?

This organizational culture presentation template can be used for training, webinars and the classroom alike. You can also use it for other purposes by editing the content and design. It comes with a nice process slide, images of people that you can easily replace and other useful visual elements.

Best Presentation Templates for Nonprofit

We also have a list of the best presentation templates tailored to the needs of nonprofit organizations. Find your pick from a selection of presentation templates on wildlife conservation, pet adoption, nature and environmental issues, and more.

  • Art Project Presentation
  • Nonprofit Environmental Presentation
  • Nonprofit Annual Report Presentation
  • Pet Adoption Presentation
  • Wildlife Conservation Presentation
  • Animal Background Presentation
  • Education Support Program Presentation
  • Public Health Awareness Presentation
  • Breast Cancer Awareness Presentation
  • Poverty Alleviation Presentation
  • Women Empowerment Presentation
  • Mental Health Presentation

76. Art Project Presentation

This art project presentation is great for all kinds of nonprofit organizations, schools and even businesses. It’s full of creative data visualizations that you can customize and even animate. Whether you’re presenting an idea for an art competition or just reporting project status, this presentation template can easily fit your purpose.

77. Nonprofit Environmental Presentation

If you’re looking to create awareness about the environment or just require a nature-themed presentation template for your next project, this green slideshow might be just right. It features several slides designed with the environment in mind, with nature images and even data visualizations to help you communicate your cause and project updates.

78. Nonprofit Annual Report Presentation

This nonprofit annual report presentation template is perfect for showcasing those strong numbers and building your case for fundraising. You can swap the existing content, colors, images and any other visual element for your own in Visme’s intuitive presentation maker.

When creating a presentation for a nonprofit, choose a template that's versatile and offers easy customization options.

79. Pet Adoption Presentation

This adorable pet adoption presentation template can be customized for your own nonprofit organization with a few clicks. It features a handful of cute pet images, which you can easily replace with your own photos or the ones you choose from Visme’s free stock image library.

80. Wildlife Conservation Presentation

Raise awareness about wildlife conservation or any other related cause with this customizable presentation template. The creative slides feature an effective blend of images, text and data visualizations to help you communicate all the right information in a visually engaging manner.

81. Animal Background Presentation

This is another wildlife or animal related presentation template that you can use for your project, cause or nonprofit organization. You can replace the images with your own, change the color scheme and do much more in Visme’s drag-and-drop presentation software.

82. Education Support Program Presentation

Show how your nonprofit or social project is making a difference in the lives of children with this education support program presentation template. You can also modify this template according to your own content and design needs, add images, icons and data visualizations, and download it in PowerPoint or PDF format.

Visme also allows you to share or download presentations in PowerPoint or PDF format.

83. Public Health Awareness Presentation

This health awareness presentation is a great fit for government organizations, nonprofits and medical institutions that want to educate people on public health topics, such as COVID-19 and vaccines. Use this presentation template as is, or change the colors, text, visuals and icons inside to suit your own needs.

84. Breast Cancer Awareness Presentation

Educate your audience on the topic of breast cancer awareness, and encourage others to support your cause using this cancer awareness presentation template. This template already comes with a feminine color scheme fit for the topic of breast cancer, but you can modify it easily according to your content and design needs.

85. Poverty Alleviation Presentation

Raise awareness, funds and support for your cause with this poverty alleviation presentation template. This template can be used by nonprofits, government programs and even businesses running corporate social responsibility projects. Customize the color scheme, fonts, text, images and other features of this presentation template, and use it to reach your nonprofit goals.

86. Women Empowerment Presentation

Just like the subject of feminism and women empowerment, this presentation template is bold and powerful. Use it as is, or modify the content and design to suit your unique needs. This women empowerment presentation template can be used by nonprofits, feminist organizations and even businesses looking to educate their employees on gender and diversity topics.

87. Mental Health Presentation

This mental health presentation can help you educate your audience on issues and topics that matter the most, such as psychological well-being and what to do if someone you love is affected by mental illnesses.

Use this presentation template as is to generate awareness or edit the content and design inside to suit your unique needs.

Find the Best Presentation Template For You

There you have it, the best free PowerPoint templates for 2024!

Finding the right presentation template is the first step in creating a powerful slideshow. This list of the best presentation templates will help you get started.

What are you waiting for? Unleash your creativity with our curated collection of free downloadable creative PPT templates. From modern and minimalist designs to bold and artistic layouts, there's something for every presenter.

Sign up for Visme's presentation software today (it's free!) and start using your favorite template.

Create beautiful presentations faster with Visme.

About the Author

Mahnoor Sheikh is the content marketing manager at Visme. She has years of experience in content strategy and execution, SEO copywriting and graphic design. She is also the founder of MASH Content and is passionate about tea, kittens and traveling with her husband. Get in touch with her on LinkedIn .

Paper Presentation Instructions

ISMAR Sessions and Presentations

ISMAR will provide an online and an offline experience this year. Authors can engage directly with the community during online sessions via Zoom and GatherTown, and stay connected both synchronously and asynchronously between sessions and hosted events using Discord. The sessions will be aligned mainly with the Italian time zone. Since ISMAR is an international conference, the planning team is working to stretch sessions into the early morning and late afternoon hours to facilitate synchronous ISMAR experience slots for authors and attendees from Asia and the Americas.

Despite being virtual, ISMAR wants to offer a scientific exchange equivalent to an on-site conference, or as close to one as possible. Therefore, all authors must present their papers online during the assigned session and time. We will announce the exact time slots and sessions for each author in an email no later than Sept. 17, 2021. The planning committee will accommodate each author's time zone as best as possible to limit inconvenient presentation times; however, early morning and evening times may be necessary. Please contact us after receiving your assigned presentation date/time if additional accommodations are required.

Every session will include two parts: a presentation and a discussion. During the presentation part, each author will have 12 minutes (+1 minute to switch presenters) to present their research and answer up to one clarification question from the session chair. The audience is encouraged to post questions in Discord during or after the presentation. After the presentation part, the discussion part will begin: authors and audience will meet in designated virtual areas to further discuss the research. Authors are expected to be available in the discussion area. These areas will remain open until the next session starts (30 minutes).

The technical support staff will record all presentations and publish them after a session has concluded. Attendees will gain access via the technical platform, and authors are highly encouraged to engage with the community asynchronously and discuss their research further via Discord. 

Presentation Template

All ISMAR 21 paper presentations are expected to use the ISMAR slide template for both their live presentation and the video backup, especially for the title and credits page. Consistent use of the ISMAR template is important for consistent branding and, in the long run, helps grow and sustain recognition of the quality of ISMAR's scientific contributions.

Note that the template contains an ISMAR 21 branded “title slide” and also templates for several different types of typical “regular” slides. 

Video Backups

Although we seek live presentations, we cannot eliminate technical challenges entirely. Thus, each author must prepare a backup video and post it unlisted on YouTube to ensure a smooth conference experience for the ISMAR community. Technical support will play the video if an author encounters technical challenges during the live presentation.

Please follow the video guidelines on the ISMAR web page: https://ismar21.org/contribute/video-guidelines/

Some additional instructions:

  • Videos may not exceed 12 minutes, the regular presentation time.
  • Use the ISMAR template provided here to prepare your video.
  • Limit the ISMAR-branded TITLE SLIDE duration to approximately 5 seconds.
  • Consider adding a slide directly after the ISMAR-branded TITLE slide that shows photos of the authors.
  • Refrain from starting with a loud jingle or music, especially if it requires royalties.
  • Since videos will be posted on YouTube, make sure all content is royalty-free (e.g., photos) and free of offensive text/messages; YouTube policies may block your video otherwise. Note that offensive text/messages should never be part of a professional presentation.
  • Please ensure that your audio does not clip at 0 dB. When using a computer-internal microphone, the best recording gain for voice is -12 dB.
  • Your video must have subtitles. Please review the video guidelines (see link above) for further information about subtitles.
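The 0 dB / -12 dB audio guidance above can be verified before upload. As a minimal, illustrative sketch (the `peak_dbfs` helper and the synthetic test tone are assumptions for demonstration, not part of the ISMAR guidelines), the peak level of 16-bit PCM audio can be measured in dBFS, where 0 dBFS is digital full scale and clipping begins:

```python
import math

def peak_dbfs(samples, full_scale=32767):
    """Peak level of 16-bit PCM samples in dBFS (0 dBFS = digital full scale)."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return float("-inf")  # digital silence
    return 20 * math.log10(peak / full_scale)

# Illustrative check: a 440 Hz tone recorded at ~25% of full scale peaks
# near -12 dBFS, the recommended voice level; values approaching 0 dBFS
# indicate clipping risk.
rate = 44100
tone = [int(0.25 * 32767 * math.sin(2 * math.pi * 440 * n / rate))
        for n in range(rate)]
print(round(peak_dbfs(tone), 1))
```

Applying the same function to samples read from your recording's audio track (for example, a WAV export read with Python's standard-library `wave` module) gives a quick pre-upload sanity check that your peak sits near -12 dB and well below 0 dB.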

Please upload your video to YouTube following the video guidelines and share the link via PCS ( https://new.precisionconference.com/ ). Use the camera-ready form and the text field “URL to presentation video (unlisted Youtube link)” to submit your link no later than Sept. 22, 2021.

Tips for Live Zoom Presentations

  • Stop any programs that use internet data such as Dropbox, Google Drive or any other cloud storage system.
  • If you can connect to your Router through a wired Ethernet connection, this will provide the highest bandwidth and most stable/reliable connection (unless you have a WiFi 6 enabled setup).
  • If a wired connection is not possible, please consider disabling other devices and also other processes/applications on your system, to allow for more available bandwidth during your stream. Please also get as close to your WiFi router as possible.
  • For those with unreliable internet connections – consider purchasing a 4G/5G USB dongle as a failover device for your main internet connection. These can be set up to kick in automatically if your connection drops.
  • Please use mains power where possible.
  • Make sure to only open the applications and windows that you will need for your presentation. This will reduce the load on your computer and reduce the chance of video/audio issues.
  • If you plan to share your screen consider disable desktop notifications for the duration of your presentation.
  • Make sure to also close any unnecessary programs, especially any that can give you notifications (e.g Slack).
  • Record at a minimum resolution of 720p.
  • We highly recommend using a third-party webcam rather than the built-in webcam on your laptop (at least for older or lower-end laptops). The added flexibility of a USB-attached webcam can be hugely valuable when trying to set up an effective recording environment. We highly recommend the Logitech C920 series.
  • It’s recommended to disable auto-exposure and auto-focus where possible and set these manually ahead of your recording. This will prevent the video image from fluctuating due to small changes in lighting or refocusing due to movements.
  • Remember to look at the camera, not just your presentation/notes. You could add a prop to the camera that will allow you to easily focus on it (googly eyes on your webcam or a stuffed animal next to the camera that you can pretend is your audience). This is especially important if you have a built-in webcam that is at the bottom of the screen.

Presentation

  • Tablet/Phone – Consider using a tablet or phone as a companion device for your presentation. Most presentation software applications now come with a mobile app where you can view your speaker notes more conveniently.
  • Software – It’s recommended to get familiar with the Zoom controls ahead of time, particularly the options for screen sharing and switching between shared applications.
  • If you have an external microphone, it can provide much better audio quality than built-in laptop mics – USB microphones provide excellent quality whilst being very easy to set up and use.
  • Try to reduce any background noise to the bare minimum. Turn off things like air conditioning, computers, your phone, and other features of your location that might contribute to additional noise.
  • Consider closing windows and doors to reduce noise from outside.
  • Move to a room away from any louder sources of noise.
  • Rooms with hardwood floors, minimal soft furnishings, or lots of glass (or other resonant materials/objects) can be a poor location for recording or broadcasting audio, due to increased reflections, echo, and noise. Consider recording in a room with lots of soft furnishings, carpeted floors, curtains, and other non-reflective surfaces to provide the best-quality audio for the recording. You can ‘treat’ a reflective space by adding blankets, cushions, pillows, towels, etc., placing these on the hard surfaces to reduce reflections.
  • If you have audio in your presentation, please consider using headphones or earbuds so that the audio output from your laptop is not picked up by your mic.
  • Plug in any external light sources; if running on batteries, consider reducing brightness for a longer battery life.
  • Choose a space with minimal backlight (no windows or other bright light sources behind you).
  • Sit facing the brightest source of light you have available. A darker background is preferable, as this will make it easier to distinguish you from your surroundings.
  • Ambient light can vary due to changes in weather, clouds, and time of day, so it is recommended to use a continuous lighting source if possible. Close curtains and any other sources of light if possible.
  • A darker overall environment will mean the camera has to work harder to boost the light levels artificially, which will lead to a loss of quality for your feed. Position a lamp or other light source off to one side of you (but out of shot of the camera); this will provide a consistent light source throughout your presentation.

© 2022 by ISMAR

Sponsored by the IEEE Computer Society Visualization and Graphics Technical Committee and ACM SIGGRAPH

paper presentation 2021

PHILIPPINE EDUCATIONAL MEASUREMENT AND EVALUATION ASSOCIATION, INC.

Securities and Exchange Commission (SEC) Company Registration No. CN201012088, TIN 007-834-895-00.


  ICEME 2021 Paper Presentation

International Conference on Educational Measurement and Evaluation, “Assessment in the New Normal: Issues, Challenges, and Prospects”.

May 26-28, 2021

Virtual Conference via Microsoft Teams Live

Day 2, May 27

10:45-11:45

Concurrent Session A1:  Assessment of Learning during the COVID-19 Pandemic

Session Chair: 

Moderator: 

From Face-to-Face to Virtual Assessment: Changes in Student Assessment Practices during COVID-19 among Filipino Teachers

Richard DLC Gonzales, Ph.D.

Inno-Change International Consultants, Inc.

Student Assessment Changes During COVID-19:  A Sample Case from Selected Teachers of Gattaran East and Central Districts

Charito G. Fuggan, Ph.D.

Gattaran East District, Department of Education

Rogelio C. Lazaro, Jr., Ph.D.

Rhoda C. Lazaro, M.A.

Gattaran Central District, Department of Education

Reconfiguring Assessment in Online Education: Lens from Teachers and Learners during Pandemic

Jason V. Chavez

Zamboanga City State Polytechnic College

Concurrent Session A2: Development of Non-Cognitive Measures

Session Chair:

Development and Validation of the Social Awareness Competency Scale (SACS)

Chona T. Chin

Christine Joy A. Ballada

De La Salle University

Development and Validation of Perceived Academic Stress Scale

Diezer Nerwin A. Dimaano

John Jonathan L. Lazaro

Don Johnson C. Zabala

Philippine Normal University

Teachers’ Roles in Focus: Factor Analysis of Captures from Student Evaluation of Teaching (SET)

Maria Ellen DLR. Alcomendras

Cebu Technological University

Norliza M. Nordan

Pamantasan ng Lungsod ng Maynila

Concurrent Session A3: Innovations in Classroom Assessment

Assessing Students’ Reflections on Historical Narratives through Meme Making

Araibo Jose D. Elumba

Philippine Science High School-Zamboanga Peninsula Region Campus

Learning Map Framework and Logical Reasoning in Contextualized Physics Problem Solving

Giovanni T. Pelobillo

University of Mindanao

Framework for Assessing Student Metacognition in 21st Century Science Learning: A Mixed Methods Perspective

Milano O. Torres, Ph.D.

Bicol State College of Applied Sciences and Technology

Antriman V. Orleans, Ph.D.

Philippine Normal University

Josephine M. Ramos

Concurrent Session A4:  Language Assessment

Employers’ Perspectives on English-Major Graduates’ Attributes, Skills, and Personal Qualities

Jocelyn L. Gagalang, Ph.D.

University of Rizal System-Pilillia Campus

Exploring ESL Teachers’ Assessment Practices on Grading and Feedback

Eden Grace Irag Conopio

Written Corrective Feedback and Learner’s Motivation Towards English Performance Tasks in Modular Distance Learning

Rogelio D. Emralino II

Talangan Integrated National High School

Lunch Break

Concurrent Session B1:  Learning in the New Normal

Challenges and Issues Faced by Science Educators on the Implementation of Printed Modular Distance Learning in Roxas, Palawan

Ronia Melecia R. Mosaso

Catherine Genevieve B. Lagunzad, Ph.D.

Maria Isabel P. Martin, Ph.D.

Ateneo de Manila University

Teaching Critical Thinking Skills in The New Normal

Janneth Ong, Ph.D.

Jovelyn Delosa, Ph.D.

Xavier University

Virtual Case Study Analysis Method: Module, Rubric and Reflection

Emerson G. Cabudol, Ph.D.

Centro Escolar University

Concurrent Session B2: Validation of Non-Cognitive Measures

Psychometric Evaluation of the COVID Stress Scales in a Filipino Sample

Benedict G. Antazo

Jose Rizal University

Exploring the Factor Structure of the Exercise Dependence Scale-Revised among Filipino Fitness Enthusiasts

Edith G. Habig

University of Santo Tomas

David Paul R. Ramos

Jessica May Guillermo

Factor Analysis of the Technology Readiness Survey (TRS) in the Pursuit of Digital Curriculum

Joanna Marie A. de Borja

Adonis P. David

Concurrent Session B3:  Assessment Practices in Schools

A Review of the Performance Assessment System at FEU-IE

Sandra Co Shu Ming

Far Eastern University

Evaluation of School’s Performance Assessment System: Towards Achieving Standards

Alelie B. Diato

Cavite State University- General Trias City Campus

Assessment Practices of Teachers Implementing Philippines and Singapore Elementary Mathematics Curriculum: Inputs to K-12 Mathematics Education

Joy Therese L. Villon

Southern Luzon State University

Concurrent Session B4:  Innovations in Pedagogy

A Phenomenological Study of Gamification in an ESL Class

Joannalyn A. Guba

Pedro Tuason Senior High School

Game-Based Learning: Effects on Students’ Motivation and Interest in Learning

Glen Mangali, Ph.D.

Niña Khielle N. Abao

Colegio De San Juan De Letran

Margie M. Lepangge

Nobelen Joy M. Marsonia

Rizal Technological University

Improving Student’s Engagement in Online Class Using Digital Exit Tickets

Melandro D. Santos

Christian M. Gonzalez

Tondo High School

Transition to Breakout Rooms

Concurrent Session C1:  Assessment in the New Normal

The Role of Performance-Based Assessment in The New Normal Classroom

Accessible Resource, Augmented Learning (ARAL) Assessment System: A State University Perspective in the New Normal

Frankie Aspira Fran

Romblon State University

The Assessment of Decoding, Oral Reading Fluency, and Reading Comprehension in Sinugbuanong Binisaya, Filipino, and English during the COVID-19 Pandemic

Kathrina Lorraine M. Lucasan

University of the Philippines Center for Integrative and Development Studies

Concurrent Session C2:  Development of Non-cognitive Measures

The Development of a Ginhawa Scale: An Initial Validation

Teresita T. Rungduin, Ph.D.

Confirmatory Factor Analysis of the Self-Awareness Competency of Social and Emotional Learning Theory

Archibald S. Siason

The Development of Filipino Seeking and Granting Forgiveness Inventory (FIL-SAGFI)

Darwin C. Rungduin, Ph.D.

Concurrent Session C3: Assessing Educational Outcomes

Measuring Quality Education in Teaching Inclusive Education: A Systematic Review

Glen Mangali, Ph.D.

Angelika F. Cajurao

Louise Anne M. Cuysona

Jenny T. Paredes

Danilo R. Relles

Coping Behavior, Non-Academic and Academic Performance of Teenage Parents at NIPSC

Lennie S. Malubay

Northern Iloilo Polytechnic State College

Learning Mindsets and the Challenge of Academic Achievement among Filipino Students

Jason Alinsunurin, Ph.D.

Concurrent Session C4:  Educational Interventions

A Model for Intervention Materials in Preparing Junior High School Students in the Program for International Student Assessment

Marilyn Ubiña-Balagtas, Ph.D.

Enriching Learning Modules in Basic Education through Integration of ILSA Features

Dexter C. Ngo

Jacklyn C. Santiago

Obed Edum U. Baybayon

Rex Institute for Student Excellence 

Marilyn U. Balagtas

Research Competence and Productivity of Public School Teachers and Administrators

Mark T. Dasa

Concurrent Session D1:  Assessment in Flexible Learning

Assessment of Readiness for Online Learning of Senior High School Students

Ryan Ray M. Mata

Manila Adventist College

Development and Validation of a Faculty Performance Evaluation in a Flexible Learning Environment

Merlita C. Medallon, Ed.D.

Lyceum of the Philippines University-Laguna

Redesigning the Philippine Educational Placement Test to Accommodate the Assessment Needs of Filipino Learners

Mary Anne Delavin

Danilyn Joy Pangilinan

Nelia Benito, Ph.D.

Bureau of Education Assessment, Department of Education

Concurrent Session D2:  Psychometrics

Psychologist in a Pocket: Determining Principal Components of Depression via Text Analysis Data

Paula Ferrer Cheng

Roann Munoz Ramos

RWTH University Hospital

A Multidimensional Examination of Filipino Family Involvement in High School: Validation of the Philippine-Cebuano Version of the Family Involvement Questionnaire

Joseph C. Pasco

Casisang National High School

Psychometric Analysis of a Mathematics Achievement Test based on the Most Essential Learning Competencies

Sherwin Vill S. Soto

Concurrent Session D3:  High-Stakes Assessment

Using a Learning Management System Platform in Developing the Computer-Based English Proficiency Test

Jerreld Romulo

Nelia Benito

Design and Validation of the College Readiness Test (CRT) for Filipino K to 12 Graduates

Antonio Tamayao, Ph.D.

Rudolf Vecaldo

Jay Emmanuel Asuncion

Maria Mamba

Febe Marl Paat

Editha Pagulayan

Cagayan State University

Making the Case for Universal School-Based Mental Health Screening in the Philippines

Carmelo Callueng, Ph.D.

Rowan University

Maryfe M. Roxas, Ph.D.

Violeta Valladolid, Ph.D.

Francis Ray D. Subong

Iloilo National High School


58th DAC | Best Paper/Presentation Nominations

Best Paper Candidates

Architecture-aware Precision Tuning with Multiple Number Representation Systems

Distilling Arbitration Logic from Traces using Machine Learning: A Case Study on NoC

DNN-Opt: An RL Inspired Optimization for Analog Circuit Sizing using Deep Neural Networks

Gemmini: Enabling Systematic Deep-Learning Architecture Evaluation via Full-Stack Integration

A Resource Binding Approach to Logic Obfuscation

BEST PAPER COMMITTEE

  • Jörg Henkel
  • Antun Domic
  • Monica Farkash
  • Sri Parameswaran

Diamond Event Sponsor

Siemens

Event Sponsors

ACM Sigda

Industry Sponsors

Ansys

  • Open access
  • Published: 10 September 2024

Mirror, mirror on my screen: Focus on self-presentation on social media is associated with perfectionism and disordered eating among adolescents. Results from the “LifeOnSoMe”-study

  • Hilde Einarsdatter Danielsen 1,2,
  • Turi Reiten Finserås 1,
  • Amanda Iselin Olesen Andersen 1,
  • Gunnhild Johnsen Hjetland 1,3,
  • Vivian Woodfin 2,4 &
  • Jens Christoffer Skogen 1,3,5

BMC Public Health volume 24, Article number: 2466 (2024)


Background

Social media use, perfectionism, and disordered eating have all increased over the last decades. Some studies indicate that there is a relationship between self-presentation behaviors and being exposed to others’ self-presentation on social media, and disordered eating. Studies also show that the relationship between focus on self-presentation and highly visual social media is stronger than for non-visual social media, hence facilitating upward social comparison. Nevertheless, no previous studies have investigated the link between adolescents’ focus on self-presentation and upward social comparison on social media, and perfectionism and disordered eating, which is the aim of the present study.

Methods

The present study is based on a cross-sectional survey from the “LifeOnSoMe”-study (N = 3424), conducted in 2020 and 2021. Respondents were high school students (mean age 17.3 years, 56% females) in Bergen, Norway. Multiple regression analysis was performed, where SPAUSCIS, a measure of self-presentation and upward social comparison, was the independent variable. Perfectionism and disordered eating were dependent variables. Self-reported age, gender, and subjective socioeconomic status were used as covariates, as well as frequency and duration of social media use. Regression models were performed to compare proportions across the median split of SPAUSCIS.
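
For readers unfamiliar with the term, a median split dichotomizes a continuous score (here SPAUSCIS) into a low and a high group at the sample median. A minimal illustrative sketch (not the study's code; the tie-handling rule is an assumption):

```python
import statistics

def median_split(scores):
    """Label each score 'low' or 'high' relative to the sample median.

    Scores equal to the median are assigned to the 'low' group here;
    this tie rule is an assumption for illustration.
    """
    med = statistics.median(scores)
    return ["low" if s <= med else "high" for s in scores]

print(median_split([1, 2, 3, 4, 5, 6]))  # → ['low', 'low', 'low', 'high', 'high', 'high']
```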

Results

The multiple regression analysis showed that increased focus on self-presentation and upward social comparison on social media were positively associated with both perfectionism (standardized coefficient 0.28) and disordered eating. A stronger association for girls than boys was found for disordered eating (standardized coefficient 0.39 for girls and 0.29 for boys). There was no gender moderation for perfectionism.
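
A standardized coefficient is simply the regression slope obtained after z-scoring both predictor and outcome; with a single predictor it equals the Pearson correlation. An illustrative sketch with made-up numbers (not the study's data or model, which also included covariates):

```python
import statistics

def standardized_slope(x, y):
    """OLS slope after z-scoring both variables (a standardized coefficient).

    With a single predictor this equals the Pearson correlation of x and y.
    """
    mx, sx = statistics.mean(x), statistics.stdev(x)
    my, sy = statistics.mean(y), statistics.stdev(y)
    zx = [(v - mx) / sx for v in x]
    zy = [(v - my) / sy for v in y]
    # OLS slope for standardized variables: sum(zx * zy) / sum(zx ** 2)
    return sum(a * b for a, b in zip(zx, zy)) / sum(a * a for a in zx)

print(round(standardized_slope([1, 2, 3, 4, 5], [2, 4, 5, 4, 5]), 2))  # → 0.77
```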

Conclusions

Findings suggest that focus on self-presentation and upward social comparison on social media is associated with perfectionism and disordered eating. We recommend promoting a healthy use of social media. This could be established by increasing adolescents’ ability to reflect on and think critically about self-presentation and upward social comparison on social media.


Introduction

Growing up today means growing up in a highly digitalized world where social media and online communication play an important role in adolescents’ lives. Social media can be defined as “highly interactive platforms via which individuals and communities share, co-create, discuss, and modify user-generated content” [ 1 , pp. 241]. Previous studies have largely focused on the temporal aspects of social media use, and some studies indicate that social media use is associated with more mental health problems and decreased well-being [ 2 ]. For example, there are reports that more time spent on social media is associated with symptoms of depression and anxiety [ 3 , 4 ], sleep issues [ 3 , 5 ], and body dissatisfaction [ 6 ]. However, not all research confirms these associations [ 7 , 8 ], and recent studies have indicated that the observed link between time spent on social media and mental health is too small to be of practical importance [ 9 ]. A recent longitudinal study found time spent on social media to be the least important factor in relation to adolescent mental health [ 10 ]. Nevertheless, there is an ongoing and almost ubiquitous concern regarding social media’s potential negative effect on mental health. Considering this, it is increasingly recognized that it is important to investigate more than adolescents’ time spent on social media, such as their usage patterns. After all, social media offers a range of opportunities, such as seeking out like-minded others or specific topics and inspiration, for example, for food, fitness, and a healthy lifestyle. Although inspirational hashtags and pictures may be positive for many adolescents, they also frequently present a “perfect” lifestyle, and some of them could even be considered unhealthy inspirations.

Self-presentation

Self-presentation on social media has been highlighted as potentially important in connection with mental health and well-being among adolescents [e.g. 11 , 12 , 13 , 14 ]. Baumeister & Hutton [ 15 ] defined self-presentation as an individual practice related to how one presents oneself to others, motivated by a wish to make a socially desirable impression on others, and simultaneously, stay true to one’s beliefs and ideals. On social media, self-presentation may include presenting and sharing self-made content, posting of personal opinions, sharing online content of interest, and “selfies” and pictures [ 14 , 16 ]. An American report noted that adolescents are more engaged in self-presentation activities on social media than any other age group [ 17 ]. As increased independence from parents is an important developmental milestone for adolescents, external validation from others may be particularly important for this age group [ 18 ]. Feedback on social media posts through likes and comments, may therefore be an important source of external validation from peers. Considering this, it is likely that many adolescents put great importance on how they present themselves on social media. In addition, social media is a suitable arena for self-presenting activities, as it gives the adolescent control over what, when and how to present themselves on the platform of their choosing [ 12 ]. Functions such as likes, comments, followers [ 19 ], and other measures of engagement, which are implemented on many social media platforms in one form or another, give ample opportunity for immediate feedback on posted content. Hence, this provides cues of social desirability and direction to align future social media posts with how the adolescents prefer to present themselves on these platforms [ 12 ]. These features of social media, in addition to the ability to reach a large and varied audience, may serve to facilitate self-presentation [ 20 ].

Self-presentation behaviors [e.g. 13 , 14 ] on social media are closely connected to focus on self-presentation [ 12 , 21 , 22 ]. Focus on self-presentation consists of caring about how you present yourself on social media, e.g., retouching pictures before posting them, caring about having a nice social media feed, or striving for positive feedback on your social media posts, and can be independent of how much or how often a person posts something [ 12 , 21 ]. As such, focus on self-presentation differs from self-presentation behaviors, which have been more extensively researched [e.g. 13 , 14 ]. A study showed that many adolescents have a desire to focus less on their self-presentation on social media, but that they think it is hard to resist the pressure of having a good feed and receiving positive feedback such as likes, comments, and followers [ 23 ]. A higher focus on self-presentation has been linked to the use of highly visual social media platforms like Instagram, TikTok and Facebook, rather than less visual platforms [ 12 ].

Likewise, use of social media has been linked to more social comparison, and in particular upward social comparison [ 24 , 25 ]. Social comparison is the propensity to compare one’s characteristics to those of other people to obtain information about how one is doing relative to others [ 26 ]. Upward social comparison occurs when one compares oneself to someone perceived as better or with higher status than oneself, which may be especially prevalent on social media. One study found that social media users mostly presume that other users have better lives than themselves [ 27 ]. Moreover, following a large number of people on social media increases the reference group to which adolescents compare themselves, and may include high-status people like “influencers” and celebrities [ 28 ]. Upward social comparison has been reported to be associated with more negative feelings such as depression and lower life satisfaction [ 11 , 29 ], and more body dissatisfaction [ 30 ]. Hawes et al. [ 31 ] also found that preoccupation with appearance comparison on social media was linked to symptoms of anxiety and depression among adolescents. Thus, while self-presentation on social media may not be harmful, feedback-seeking and upward social comparison may be damaging to mental health.

Perfectionism

In addition to being a central period for self-presentation activities, adolescence seems to be a particularly susceptible period for the development of perfectionism. Perfectionism is a personality disposition that may be defined as the tendency to set unrealistically high performance standards and striving for flawlessness [ 32 ]. Perfectionism is thought to be a disposition largely consolidated in adolescence as a part of a general identity formation [ 18 ].

Over the last 30 years, there has been an increase in perfectionistic personality traits among young adults [ 33 ]. Curran & Hill [ 33 ] hypothesize that this might be a consequence of the rise of a competitive cultural trend, and also the advent of social media in young peoples’ lives. As social media gives adolescents control over how they self-present, it also allows them to create a (highly) specific and “ideal” image of themselves. Considering these perspectives, Curran & Hill [ 33 ] suggest that young people perceive their social context as more demanding and subsequently believe others will evaluate them more harshly. An experimental study investigating the effect of selfie taking and posting on social media on women’s mood and body image concluded that the psychological states subsequent to posting the selfies were related to self-consciousness and/or fear of being negatively evaluated [ 14 ]. Thus, adolescents of today may, to a larger extent than older generations, strive for perfectionistic self-presentation in order to secure acceptance among peers. Hewitt et al. [ 34 ] suggested the concept of perfectionistic self-presentation and argued that this is a maladaptive self-presentation style. One facet of perfectionistic self-presentation is perfectionistic self-promotion, which includes proclaiming and displaying one’s perfection [ 34 ]. Through features such as likes, comments and followers, social media may be a key arena for perfectionistic self-presentation and self-promotion, and hence a way of seeking external validation and approval in a socially acceptable way among adolescents.

A study found that perfectionistic concerns predicted longitudinal change in self-presentation and that perfectionistic self-presentation was linked to decreased well-being [ 35 ]. Hence, perfectionistic concerns indirectly affected subjective well-being through self-presentation [ 35 ]. Perfectionistic self-presentation also predicted changes in both positive and negative affect [ 35 ]. In a meta-analysis, perfectionism was found to be positively associated to different psychological disorders and symptoms, including body dissatisfaction, and eating disorders [ 36 ].

Disordered eating

Previous research has linked disordered eating to self-presentation [ 25 ] and to perfectionism [ 36 , 37 , 38 ]. A person with disordered eating is preoccupied with food and has constant thoughts about eating, body shape, and weight. Symptoms of disordered eating above a certain level may constitute an eating disorder according to the criteria in the Diagnostic and Statistical Manual of Mental Disorders (DSM 5th Ed.) [ 39 ] and the International Classification of Mental and Behavioral Disorders (ICD-10) [ 40 ]. A meta-analysis reported that over the last 20 years, there has been an increase in the weighted means of point eating disorder prevalence from 3.5% for the years 2000–2006 to 7.5% for the years 2013–2018 [ 41 ]. The prevalence of eating disorders was consistently higher among women compared to men regardless of timeframe (lifetime, 12-month, point prevalence). In the same meta-analysis, the authors also stressed the finding that eating disorders are highly prevalent in adolescence, with an estimated point prevalence between 6% and 8% [ 41 ].

As a great deal of content on social media promotes pictures of healthy food, diets, exercise, and appearance-focused images and idealized bodies, concerns have been raised that social media may contribute to body image concerns and disordered eating, especially among adolescents [ 42 , 43 ]. A systematic review, conducted by Holland & Tiggemann [ 43 ] showed that exposure to content on Facebook, in particular photo-based activity, was positively associated with negative body image and disordered eating behaviours in children, adolescents, and young adults. Another study found similar results; more exposure to appearance-related pictures on Facebook was associated with self-objectification, weight dissatisfaction, thin ideal internalization, and drive for thinness among girls [ 44 ].

Similarly, research indicates that exposure to others’ “perfect” self-presentations on social media may reinforce one’s own body image concerns and disordered eating [ 24 , 25 ]. Fardouly et al. [ 24 ] investigated young adult women’s appearance comparisons in different contexts in everyday life. They found that most of the appearance comparisons were made in person and on social media, and that the participants made relatively more upward appearance comparisons on social media than in person. They also found that upward appearance comparisons made on social media were associated with more body dissatisfaction than in person. In addition, upward appearance comparisons on social media yielded more thoughts about dieting than in person comparisons, but no difference in the likelihood of dieting-behaviours [ 24 ].

Furthermore, Rodgers et al. [ 25 ] found that social media use was positively correlated with higher internalization of appearance ideals, including a higher tendency to engage in appearance comparison, body dissatisfaction, muscle change behaviours and dietary restraints among both boys and girls. In addition, the internalization of social media ideals, the muscular ideals and appearance comparisons, were positively associated with body dissatisfaction, muscle change behaviours and dietary restraints. Other research has reported similar results [ 6 , 45 ]. Mclean et al. [ 45 ] found for instance, that self-presentation on social media was associated with internalization of social media ideals, and that the internalization mediated the effect of social media on appearance upward comparison and body dissatisfaction. A scoping review conducted by Dane & Bhatia [ 46 ] also reported that in cases where social media use led to eating disorder, the thin/fit body ideal internalization and social comparison often functioned as mediating pathways.

Theoretical framework, summary and the current study

The Tripartite Influence Model (TIM) may serve as a theoretical framework linking the concept of focus on self-presentation and upward social comparison on social media with perfectionism and disordered eating [ 47 ]. The Tripartite Influence Model is a framework that can be used when exploring the relationship between social media use and body dissatisfaction. It proposes that pressures from peers, family, and media make one conform to certain appearance ideals, which can lead to internalization of body ideals, followed by physical appearance comparison with others [ 48 ]. This study’s focus on self-presentation and upward social comparison on social media aligns with the Tripartite Influence Model’s emphasis on how media and peers (e.g., what content receives positive feedback from peers) may contribute to adolescents’ perception of ideal body standards. Findings indicate that higher focus on self-presentation is more strongly linked to visual social media platforms than to less visual platforms [ 12 ]. This supports the Tripartite Influence Model’s theory that media pressure, especially through highly visual social media, leads to increased body ideal internalization and upward comparison with others. Additionally, the association between social media use and disordered eating can be understood through pressure to conform to societal ideals, such as body ideals, as proposed in the Tripartite Influence Model. Perfectionism, which is linked to disordered eating [ 36 , 37 , 38 ], may be driven by similar societal pressures.

Research on adolescents’ use of social media is increasingly shifting focus away from looking merely at time spent to include potential consequences of specific aspects of adolescents’ social media usage patterns [ 2 ]. The use of social media, perfectionism, and disordered eating have all increased over the last decades [ 33 , 41 , 49 ]. Studies indicate a relationship between being exposed to how others present themselves on social media and body dissatisfaction and disordered eating [ 24 , 25 , 43 ], and some studies have also investigated the relationship between self-presentation behaviors and body dissatisfaction [ 13 , 14 , 30 ]. Moving beyond self-presentation behaviors, such as the frequency or content of social media posts, one study showed that being preoccupied with appearance on social media was associated with increased risk for problems like appearance-related anxiety and disordered eating [ 22 ]. In two previous studies, we showed that preoccupation with likes and comments, retouching photos of oneself, deleting photos with too few likes, and upward social comparison, collectively referred to as “focus on self-presentation”, were associated with more symptoms of anxiety and depression [ 12 ] and that focus on self-presentation varied significantly between adolescents [ 21 ].

Hence, the aim of the present study is to investigate the link between focus on self-presentation on social media and perfectionism and disordered eating. Based on previous studies, we hypothesize that focus on self-presentation and upward social comparison is positively associated with (i) perfectionism, (ii) disordered eating, and (iii) self-reported diagnosis of an eating disorder.

Materials and methods

Study sample.

This study is based on data from the “LifeOnSoMe” study carried out at public senior high schools in Bergen, Norway. Pupils aged 16 or older were invited to participate, giving an age range from 16 to 21 years. Information about the survey was conveyed both by the teacher and digitally, and one school hour was set aside for completing the online survey. The total number of eligible participants was 3,424 (mean age 17.3 years, standard deviation 1.0), and 56% ( n  = 1916) of the participants were girls. This study included data from two survey waves conducted in September–October 2020 and June–September 2021. For participants who responded in both waves, only their 2020 responses were used in this analysis. The response rate was 53% in 2020 and 35% in 2021. The research data are stored on secure storage facilities located at the Norwegian Institute of Public Health, which prevents the authors from providing the data as supplementary information, in accordance with the General Data Protection Regulation (GDPR). Only researchers with approval from the Regional Ethical Committee had access. The study was approved by the Regional Ethical Committee and complies with the General Data Protection Regulation. Additional information about the study is available elsewhere [ 23 , 50 ].

Self-reported sociodemographics

The participants reported their age, gender, and subjective socioeconomic status. A small proportion of the participants did not state their age ( n  = 157). For gender, participants could choose between three options: “girl”, “boy”, and “other/non-binary”. Because too few participants (< 50) answered “other/non-binary”, these were excluded from the data set due to privacy concerns. Relative socioeconomic status was assessed by asking the participants to estimate how economically well off their families are compared to others, ranging from “very poor” (scored 0) to “very well off” (scored 10).

Amount of social media use

Two questions were included related to social media use in general: “How often do you use social media?” and “On the days that you use social media, approximately how much time do you spend on social media?”, giving an estimate of the frequency and duration of their usage, respectively. For frequency, the response alternatives were “almost never”, “several times a month, but rarer than once a week”, “1–2 times per week”, “3–4 times per week”, “5–6 times per week”, “every day”, “several times each day”, and “almost constantly”. In the present study, we differentiated between “daily or less”, “many times a day”, and “almost constantly”. For duration, seven response alternatives ranging from “less than 30 min” to “more than 5 h” were available. In the present study, we differentiated between “<2 h”, “2–4 h”, “>4–5 h”, and “>5 h”.

Independent variable: Self-Presentation and Upward Social Comparison Inclination Scale (SPAUSCIS)

The items used to assess upward social comparison and aspects of self-presentation were developed based on focus group interviews with senior high school pupils [ 23 ], and have been shown to have adequate psychometric properties in both this sample [ 21 ] and elsewhere [ 12 ]. Cronbach’s \(\alpha\) was 0.87, indicating very good internal consistency. The results of an exploratory factor analysis (EFA) and a confirmatory factor analysis (CFA) for the SPAUSCIS have been reported in a previous publication based on the “LifeOnSoMe” data [ 21 ]. EFA and CFA were also conducted in another, smaller sample of senior high school students [ 12 ]. The results from both studies strongly suggested a unidimensional scale, and the fit indices from the CFA were all considered good. Examples of items included in SPAUSCIS are “I retouch images of myself to look better before I post them on social media”, “I use a lot of time and energy on the content I post on social media”, and “The response I get for what I post (images/status updates/stories) impacts how I feel”. The response categories were “not at all”, “very little”, “sometimes/partly true”, “a lot”, and “very much”, coded 1–5. The mean summed score thus ranges from 1 to 5, with higher scores indicating a higher focus on self-presentation and upward social comparison on social media.
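For readers unfamiliar with the internal-consistency statistic reported above, Cronbach’s \(\alpha\) can be computed from respondents’ item scores as \(\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_i \sigma^2_i}{\sigma^2_{total}}\right)\), where \(k\) is the number of items. A minimal sketch in plain Python follows; the respondent data here are synthetic for illustration only, since the study’s raw data are not publicly available:

```python
def variance(xs):
    """Population variance; the n vs. n-1 denominator cancels in the alpha ratio."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent (all the same length)."""
    k = len(rows[0])
    item_vars = [variance([r[i] for r in rows]) for i in range(k)]
    total_var = variance([sum(r) for r in rows])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Synthetic example: four respondents, three perfectly consistent items.
perfect = [[1, 1, 1], [2, 2, 2], [3, 3, 3], [4, 4, 4]]
print(cronbach_alpha(perfect))  # -> 1.0
```

Values approaching 1 indicate that the items vary together, as the 0.87 reported for SPAUSCIS suggests.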

Dependent variables: Perfectionism and disordered eating

Perfectionism (edi-p).

Perfectionism was assessed by the 6-item perfectionism scale in the Eating Disorders Inventory (EDI) for children and adolescents [ 51 ]. The perfectionism items (EDI-P) are usually rated on a 6-point Likert scale. In the present study, however, the response options were “not true” (scored 0), “sometimes true” (scored 1), and “true” (scored 2), in accordance with the version employed in the youth@hordaland survey [ 52 ]. This yields a potential score of 0–12 when the items are summed. Previous research has found that the EDI [ 53 ] and EDI-P [ 54 ] have satisfactory psychometric properties in similar populations. Cronbach’s \(\alpha\) was 0.72 in the present study, indicating acceptable internal consistency.

Eating Disturbance Scale (EDS-5)

Symptoms of disordered eating were assessed using the Eating Disturbance Scale (EDS-5) [ 55 ]. The EDS-5 consists of five questions specifically related to eating, such as comfort eating (item 2) and strict dieting in order to control one’s eating habits (item 4). The response options are “not true” (scored 0), “sometimes true” (scored 1), and “true” (scored 2), and the summed score ranges between 0 and 10. The questionnaire has shown adequate psychometric properties and convergent validity in previous research [ 55 , 56 ]. Cronbach’s \(\alpha\) was 0.78 in the present study, indicating acceptable internal consistency.

Operationalization of EDI-P and EDS-5

For the purposes of the present study, both EDI-P and EDS-5 were used as continuous measures, as well as dichotomous variables differentiating between low and high scores based on the 90th percentile. The chosen cut-off point is informed by previous research which suggests this to be an adequate delineation for mental health problems [ 52 , 57 ].
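The dichotomization described above can be sketched as follows. The paper does not state which percentile estimator was used; this example assumes the simple nearest-rank method, and the scores are made up for illustration:

```python
import math

def percentile_nearest_rank(xs, p):
    """Nearest-rank percentile: the smallest value with at least p% of data at or below it."""
    s = sorted(xs)
    idx = max(1, math.ceil(p / 100 * len(s)))
    return s[idx - 1]

# Hypothetical summed EDS-5 scores for ten respondents.
scores = [0, 1, 1, 2, 3, 4, 5, 7, 9, 10]
cutoff = percentile_nearest_rank(scores, 90)
high_group = [x for x in scores if x >= cutoff]  # "high" = at or above the 90th percentile
```

With other estimators (e.g., linear interpolation) the cut-off can differ slightly; the low/high grouping logic is the same.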

Diagnosis of eating disorder

For the participants taking part in the study in 2020, self-reported psychiatric diagnoses were available ( n  = 1978) using a pre-defined list adapted to fit this age group. Initially, the participants had to answer “yes” or “no” to the question “Have you ever received a diagnosis for a mental health problem?”, followed by a list of 11 possible diagnoses for those who endorsed the initial question. The list was based on a similar operationalization used in large population-based studies [ 58 , 59 ]. The list contained no definition of the included disorders or conditions. For this study, the participants who chose “Eating disorder” ( n  = 36; 1.8%) from the list were identified as having been diagnosed with the condition, and all others were designated as not having received the diagnosis.

Statistical procedure

First, summary statistics of the included variables for the whole sample were estimated across the median-split of SPAUSCIS and presented in Table  1 . For categorical variables, numbers and proportions were estimated, and for continuous variables, means and standard deviations (SD). Comparisons across the median-split of SPAUSCIS were made using Pearson’s chi-squared tests for categorical variables and Wilcoxon rank sum tests for continuous variables. Then, two simple linear regression models were estimated using SPAUSCIS as the independent variable and (a) score on perfectionism (EDI-P) and (b) score on disordered eating (EDS-5) as dependent variables, respectively. The scores of the dependent variables were standardized (Z-scored) to ease interpretation of the resulting coefficients. Potential gender moderation was investigated by entering a gender × SPAUSCIS interaction term into both models. The interaction term was considered statistically significant at a p-value of < 0.05, and if significant, results from the linear regression model were presented separately for girls and boys. Linearity of the association between SPAUSCIS and the dependent variables was investigated using restricted cubic splines with four knots. Next, two gender-specific multiple logistic regression models were estimated using the median-split of SPAUSCIS as the main independent variable and the 90th percentile score on (a) perfectionism (EDI-P) and (b) disordered eating (EDS-5) as dependent variables, respectively. Both models were adjusted for usual amount of social media use and socioeconomic status, and the results are presented as odds ratios with corresponding 95% confidence intervals. The median-split of SPAUSCIS was used in these models for simplicity and ease of interpretability.
In post-hoc analyses, we did, however, investigate the association between SPAUSCIS as a continuous measure and the 90th percentile score on (a) perfectionism (EDI-P) and (b) disordered eating (EDS-5) as dependent variables, respectively. This was done using logistic regression analyses with restricted cubic splines to test for non-linearity. Both models were adjusted for usual amount of social media use and socioeconomic status, and the results are presented in-text as odds ratios for trends with corresponding 95% confidence intervals. Finally, we investigated the association between the median-split of SPAUSCIS and self-reported eating disorder using simple logistic regression. No adjustments or investigation of potential gender moderation were included in the latter analysis, as the number reporting an eating disorder ( n  = 36) limited the statistical precision. Missing data ranged from n  = 2 (0.1%) to n  = 55 (1.6%) across analyses, and pairwise deletion was applied to ensure the highest number of observations in each analysis.
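In the simplest case used in the procedure above, a binary exposure (at/above vs. below the SPAUSCIS median) against a binary outcome (top decile vs. rest), the unadjusted odds ratio and its Wald 95% confidence interval reduce to a 2×2 table computation. A minimal sketch, with made-up counts rather than the study’s data:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude OR with Wald 95% CI from a 2x2 table:
                       outcome   no outcome
    exposed               a          b
    unexposed             c          d
    """
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical counts for illustration only.
or_, lo, hi = odds_ratio_ci(20, 10, 5, 10)
print(f"OR {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The adjusted estimates in the paper come from multiple logistic regression rather than this crude calculation, but the interpretation of the resulting OR and CI is the same.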

Results

Descriptive statistics of the included variables are presented across the median split of the SPAUSCIS score in Table  1 . For all of the included variables, there were significant differences between the SPAUSCIS groups (all p-values < 0.001). The group scoring at or above the median on SPAUSCIS were more likely to be girls and more likely to use social media more often and for a longer duration, but reported a slightly lower subjective socioeconomic status. Furthermore, they were more likely to report higher scores on perfectionism (EDI-P) and disordered eating (EDS-5).

Results from the gender-specific multiple logistic regression models, with the median-split of SPAUSCIS as the main independent variable and the 90th percentile score on (a) perfectionism (EDI-P) and (b) disordered eating (EDS-5) as dependent variables, are presented in Table  2 . For both boys and girls, scoring at or above the median on SPAUSCIS was associated with increased odds for both dependent variables. For both perfectionism and disordered eating, the models were adjusted for social media use and socioeconomic status. In the post-hoc analyses using SPAUSCIS as a continuous variable, the odds ratios (OR) in relation to perfectionism were 1.88 (95% CI 1.43–2.47, p  < 0.001) and 1.77 (95% CI 1.44–2.17, p  < 0.001) for boys and girls, respectively. For disordered eating, the corresponding ORs were 1.94 (95% CI 1.40–2.68, p  < 0.001) for boys and 2.00 (95% CI 1.72–2.32, p  < 0.001) for girls. Using restricted cubic splines, we did not find evidence of non-linearity in the post-hoc analyses.

The odds of reporting having been diagnosed with an eating disorder were significantly higher among those scoring at or above the median on SPAUSCIS (crude OR 3.32; 95% CI 1.58–7.84; p  = 0.003).

Figure 1. Association between focus on self-presentation and perfectionism and disordered eating. Linear regressions with restricted cubic splines. SPAUSCIS: Self-Presentation and Upward Social Comparison Inclination Scale; EDI-P: Eating Disorders Inventory-Perfectionism; EDS-5: Eating Disturbance Scale-5

Figure 1 presents findings from linear regression models with the mean score on SPAUSCIS as the independent variable and the standardized (Z-scored) score on (a) perfectionism (EDI-P) and (b) disordered eating (EDS-5) as dependent variables. For both dependent variables, a potential gender moderation of the association with SPAUSCIS was investigated, and potential non-linearity was investigated using restricted cubic splines with four knots. For disordered eating, a significant gender moderation was found, and the association was stronger for girls than for boys. For perfectionism, no evidence of gender moderation was found. For both dependent variables there was a significant linear association with self-presentation, corresponding to a low-to-moderate effect size.

Overall findings

In the present study we investigated the potential association between focus on self-presentation and upward social comparison on social media, and perfectionism and disordered eating. As hypothesized, we found evidence of consistent positive associations. Increased focus on self-presentation and upward social comparison was associated with increased levels of both perfectionism and disordered eating, with a small-to-medium effect size. For perfectionism, the associations were similar for boys and girls, while we found evidence of gender moderation for disordered eating. Specifically, the association with disordered eating was somewhat stronger for girls than for boys. For self-reported eating disorder, we also found a positive association with focus on self-presentation and upward social comparison. By focusing on how adolescents relate to self-presentation on social media, the study gives new insight into important aspects of social media usage patterns. It also provides new insight into potential gender differences in focus on self-presentation and upward social comparison on social media, and social media’s potential role in the development of disordered eating. These findings are pertinent from a public health perspective and may help to inform efforts to mitigate these potential negative effects.

Relation to previous perspectives and findings

Our findings are consistent with the Tripartite Influence Model, as our study revealed positive associations between focus on self-presentation and upward social comparison on social media, and both perfectionism and disordered eating. Individuals who focus on self-presentation and upward social comparison may be more susceptible to sociocultural pressures, which may lead to striving for perfection and conformity to unhealthy body ideals. Our findings underscore the potential role of sociocultural pressures in shaping body image dissatisfaction and disordered eating behaviors. Specifically, the positive association between focus on self-presentation on social media and perfectionism may have several explanations. Curran & Hill [ 33 ] argue that the increase in perfectionistic traits among young adults may be a response to cultural changes towards a more individualistic and competitive culture in Western societies. As social media is an important part of adolescents’ and young people’s lives, it is likely that perfectionistic tendencies will affect self-presentation on these platforms as well. Curran & Hill [ 33 ] also suggest that the increase in perfectionism among young adults may be due to their perception of increased demands from the social environment. Self-presenting in a socially desirable way in general, and on social media specifically, may be a way to ensure social acceptance from peers. They further hypothesize that the fear of losing acceptance may increase perfectionistic traits [ 33 ]. Hence, increased perfectionism may be the reason for a stronger focus on self-presentation on social media. However, since we cannot interpret the direction of the association from this study, focus on self-presentation may also increase adolescents’ perfectionistic tendencies. As perfectionism is a personality trait that is largely established during adolescence, the increased opportunity to self-present on social media, and thus the focus on self-presentation, may make adolescents more susceptible to developing perfectionistic traits.

There is a lack of research on the relationship between focus on self-presentation on social media and disordered eating. Most of the research investigating this relationship has looked at exposure to appearance-related self-presentation on social media in relation to body dissatisfaction and disordered eating [ 24 , 25 , 43 ], in addition to self-presentation behavior [ 13 , 14 , 45 ], not the relationship between a person’s focus on self-presentation on social media and disordered eating. Our results indicate a positive relationship between focus on self-presentation on social media and disordered eating. Highly visual social media platforms that expose adolescents to “perfect” bodies through others’ self-presentation may constitute an important source of such exposure. Previous findings support that exposure to body ideals may lead to internalization of these ideals among adolescents [ 25 , 45 , 46 ]. Other findings also report that upward social comparison may be a potential consequence of exposure to others’ “perfect” appearance-related self-presentation [ 24 , 31 , 60 ], leading to body dissatisfaction [ 30 ]. Subsequently, some adolescents may become more preoccupied with eating, weight, body shape, and muscularity. This preoccupation could serve as a mitigation strategy to reduce the discrepancy between the adolescent’s perceived appearance and the ideal body and appearance of the reference person, thereby reducing the negative body image and negative feelings produced by the upward social comparison.

Another explanation may be that adolescents with disordered eating are already more preoccupied than other adolescents with how they appear to others. Social media is an apt arena for self-presenting in an appearance-related and desired way, and could elicit wanted feedback from others through likes and comments. This may further reinforce the focus on self-presentation. A third potential explanation for this relationship is perfectionism as a conceivable mediating factor. As perfectionistic self-presentation can be understood as a maladaptive self-presentation style [ 34 ], perfectionism may lead to a strict view of what constitutes a good-enough self-presentation. This may also include the adolescent’s expectation that their own body be thin or muscular, raising the standard of flawlessness in their own appearance-related self-presentation on social media. If these expectations are too rigid, they might for some adolescents be a contributing cause in the development of disordered eating.

For the association between focus on self-presentation on social media and disordered eating, we found a stronger association for girls than for boys. Hjetland et al. [ 61 ] found significant gender differences in how adolescents related to self-presentation on social media. Girls reported that they invested more time and energy in the content of their own social media posts. They were more likely to at least sometimes use filters to look better, and reported feeling less satisfied with themselves because of other people’s social media posts. Girls also tended to ascribe more importance to the feedback they got on social media than boys. In general, the report showed that social media played a bigger part in the girls’ lives than the boys’, and that the girls placed more importance on what happens on social media [ 61 ]. Hence, the greater importance girls place on self-presentation on social media, and the more central role social media plays in their lives, may increase their focus on self-presenting in an ideal way, and may be stronger underlying causes in the development of eating disorders for girls than for boys.

There may also be other explanations for the gender difference we found. Objectification theory [ 62 ] suggests that women’s bodies are more often looked at, evaluated, and potentially sexually objectified. Fredrickson & Roberts [ 62 ] further argue that these views make women internalize the observer’s perspective of themselves, and to some degree also socialize women to treat themselves as objects for the pleasure of others. The emphasis put on girls’ and women’s physical appearance, in particular, is well established in our culture [ 60 ]. Through social media’s feedback mechanisms, girls may be more encouraged than boys to self-present in an objectifying way.

Social comparison theory [ 26 ], and especially upward social comparison, is another possible explanation for the gender difference in the association between focus on self-presentation on social media and disordered eating. Strahan et al. [ 60 ] found that, when describing their physical appearance, women used significantly more upward than downward social comparisons. Men, on the other hand, made more downward than upward comparisons. This tendency was not seen when women and men described other personal characteristics, like social skills. For women, they also found that the more upward social comparisons they made, the more negative statements they made about their bodies [ 60 ]. They proposed that ubiquitous appearance norms, mostly applying to women, disrupted strong self-enhancement behaviors [ 60 ]. Fardouly et al. [ 24 ] also found that women relied on upward social comparisons when comparing their appearances, and that doing this on social media was associated with more body dissatisfaction than doing it in person. A proposed explanation is that women may experience a stronger discrepancy between themselves and women they see on social media than women they see in person [ 24 ].

Previous research on self-presentation behaviors has primarily focused on appearance-related self-presentation and upward social comparison [e.g. 24 ] and associated risks among girls, such as body dissatisfaction [ 13 , 14 , 30 ], thin-ideal internalization, and disordered eating behavior [ 25 , 44 ]. However, it is important to recognize that boys may also be affected by these issues, and one study showed that body dissatisfaction affected boys’ risk of engaging in disordered eating behaviors [ 63 ]. The current body ideals for boys emphasize muscularity [ 64 ], and Eisenberg et al. [ 65 ] found that muscle-enhancing behaviors are common among American adolescents, both boys and girls. These were behaviors such as dieting, exercising, and taking protein supplements or steroids, with the aim of increasing muscle size or tone. However, most of the behaviors measured were significantly more common among boys [ 65 ], and Compte et al.’s [ 64 ] investigation of muscle dysmorphia among young adult men indicated a prevalence of almost 7%. Hence, another explanation for the gender difference we found may be that the EDS-5 questionnaire does not identify symptoms of drive for muscularity or muscle dysmorphia. In fact, muscle dysmorphia seems to be more of a concern than thinness and weight loss among boys [ 64 ]. The EDS-5 measures symptoms of disordered eating linked to preoccupation with weight loss, body shape, and drive for thinness [ 55 ], and may therefore not fully capture the range of body image concerns among boys.

Implications

The present results demonstrate the need to address focus on self-presentation and upward social comparison on social media as potentially important factors for adolescents’ mental health. As such, healthy use of social media could be promoted by increasing adolescents’ ability to reflect on and think critically about self-presentation and upward social comparison on social media. Our results indicate a need for targeted interventions to promote healthy social media use and enhance adolescents’ critical thinking about self-presentation, and underscore the urgency of public health initiatives. One public health approach would be to equip adolescents with critical thinking skills to navigate social media mindfully. In relation to appearance-related ideals, educational programs should address the unrealistic standards perpetuated online, while fostering resilience and promoting a positive self-image. Educational programs and social media literacy programs in school have been suggested to increase adolescents’ reflection on their own and others’ social media use [ 42 , 66 , 67 ]. Gordon et al. [ 42 , 67 ] introduced a four-lesson social media literacy program in a junior high school that aimed to decrease body dissatisfaction, dietary restraint, and focus on muscularity among young adolescents. They found only a small effect of the intervention. The intervention did not focus on self-presentation, and based on results from this study and previous research [e.g. 12 , 27 ], this would be an important topic for future interventions to address. Also, previous results suggest that interventions led by individuals who already have an established relationship with the adolescents and are familiar with their needs help facilitate discussions among the adolescents [ 42 ] and improve intervention outcomes. Teachers could therefore be effective social media educators, especially if social media literacy could be integrated into existing school subjects.

A study of university students showed that women with higher internalization of the thin ideal were more vulnerable to disordered body image and hence to appearance-related exposure in media [ 68 ]. The study also found that body appreciation protected women from negative effects of the exposure [ 68 ]. Thus, developing social media literacy programs specifically focusing on the effects of self-presentation and upward social comparison could be an important target for interventions, and could possibly reduce focus on self-presentation. Research [ 69 ] also suggests that increasing self-compassion is a useful strategy to prevent perfectionistic self-presentation on social media. As perfectionistic self-presentation is related to lower subjective well-being [ 35 ], this may also be a topic to address in interventions aiming to reduce focus on self-presentation and upward social comparison on social media.

While our study adds to the knowledge base, future research should investigate the concept of self-presentation on social media more closely. It will be important to examine whether different ways of self-presenting differ from each other. Previous research has investigated how people self-present, especially through the use of selfies [e.g. 70 , 71 ], and further research should investigate whether taking and posting pictures of oneself differs from other ways of self-presenting on social media in its association with adolescents’ mental health. SPAUSCIS consists of only one item asking about a specific way of self-presenting (“I retouch images of myself to look better before I post them on social media”); thus, future research on other self-presenting behaviors should include self-presentation through, for example, pictures of other aspects of adolescents’ lives, like friends or hobbies, or through text only. Investigating focus on self-presentation on social media, perfectionism, and disordered eating among adolescents younger than those included in our study will be important, as the use of social media starts early and disordered eating often emerges in adolescence [ 72 ]. Understanding at what age focus on self-presentation becomes more prominent for adolescents, and potential gender differences in this, may also be important for pinpointing intervention opportunities.

Strengths and limitations

A major strength of the present study is that it is the first to investigate the relationship between focus on self-presentation on social media, perfectionism, and disordered eating. So far, research in this area has focused on self-presentation behaviors [e.g. 13 , 14 , 30 , 45 ] in addition to exposure to others’ (perfect) self-presentations and the prevailing body ideals [e.g. 24 , 25 , 43 ]. To our knowledge, no previous study has examined the association between focus on self-presentation and perfectionism and disordered eating. In addition, the scales used in this study are well established [ 54 , 55 , 56 ]. Also, the items of SPAUSCIS were derived from focus group interviews with adolescents [ 23 ], which makes them relevant to adolescents’ experiences of self-presentation and social comparison on social media. Some limitations are also worth mentioning. The study is cross-sectional; thus, we cannot determine causality between the investigated factors and mental health. Despite the sample being large, it is limited to high schools in Bergen, Norway. Consequently, the results may not be generalizable to other countries or cultures. Also, the participation rate was moderate (53% and 35%), which may impact the validity of our findings. However, associations are less vulnerable to bias caused by low participation rates than prevalence estimates [ 73 ]. Another limitation is that SPAUSCIS in this study does not differentiate between various methods of self-presentation. Consequently, we cannot conclude from this study whether specific types of self-presentation, such as taking selfies versus posting pictures of hobbies, have the same impact on perfectionism, eating disorders, or disordered eating. Also, self-reported amount of social media use has been shown to be biased in previous research and is not likely to be an accurate measure of actual use [ 74 ]. This may have impacted our ability to effectively account for the confounding effect of social media use. Finally, although the EDS-5 is a well-established and validated measure, the questionnaire does not cover specific symptoms of drive for muscularity or muscle dysmorphia.

While previous studies have focused on self-presentation behaviors, this study found that focus on self-presentation and upward social comparison on social media is positively associated with both perfectionism and disordered eating, as well as self-reported eating disorders, among adolescents. As such, healthy use of social media could be promoted by increasing adolescents’ ability to reflect on and think critically about self-presentation and upward social comparison on social media. Our results underscore the importance of targeted public health interventions to promote awareness and healthy social media use among adolescents, emphasizing the need for educational programs that address focus on self-presentation and unrealistic appearance-related ideals, and that foster resilience and a positive self-image.

Data availability

Explicit consent from the participant is required by Norwegian health research legislation and the Norwegian ethics committees in order to transfer health research data outside of Norway. Ethics approval was also dependent on storing the research data on secure storage facilities located at the Norwegian Institute of Public Health, which prevents the authors from providing the data as supplementary information. Requests to access these datasets should be directed to [email protected].

Kietzmann JH, Hermkens K, McCarthy IP, Silvestre BS. Social media? Get serious! Understanding the functional building blocks of social media. Bus Horiz. 2011;54(3):241–51. https://doi.org/10.1016/j.bushor.2011.01.005 .

Schønning V, Hjetland GJ, Aarø LE, Skogen JC. Social Media Use and Mental Health and Well-being among adolescents – A scoping review. Front Psychol. 2020;11. https://doi.org/10.3389/fpsyg.2020.01949 .

Woods HC, Scott H. #Sleepyteens: social media use in adolescence is associated with poor sleep quality, anxiety, depression and low self-esteem. J Adolesc. 2016;51:41–9. https://doi.org/10.1016/j.adolescence.2016.05.008 .

Article   PubMed   Google Scholar  

Brunborg GS, Burdzovic Andreas J. Increase in time spent on social media is associated with modest increase in depression, conduct problems, and episodic heavy drinking. J Adolesc. 2019;74:201–9. https://doi.org/10.1016/j.adolescence.2019.06.013 .

Varghese NE, Santoro E, Lugo A, Madrid-Valero JJ, Ghislandi S, Torbica A, Gallus S. The role of Technology and Social Media Use in Sleep-Onset difficulties among Italian Adolescents: cross-sectional study. J Med Internet Res. 2021;23(1):e20319. https://doi.org/10.2196/20319 .

Article   PubMed   PubMed Central   Google Scholar  

Jarman HK, Marques MD, McLean SA, Slater A, Paxton SJ. Social media, body satisfaction and well-being among adolescents: a mediation model of appearance-ideal internalization and comparison. Body Image. 2021;36:139–48. https://doi.org/10.1016/j.bodyim.2020.11.005 .

Keles B, McCrae N, Grealish A. A systematic review: the influence of social media on depression, anxiety and psychological distress in adolescents. Int J Adolescence Youth. 2020;25(1):79–93. https://doi.org/10.1080/02673843.2019.1590851 .

Coyne SM, Rogers AA, Zurcher JD, Stockdale L, Booth M. Does time spent using social media impact mental health? An eight year longitudinal study. Comput Hum Behav. 2020;104:106160. https://doi.org/10.1016/j.chb.2019.106160 .

Orben A, Dienlin T, Przybylski AK. Social media’s enduring effect on adolescent life satisfaction. Proceedings of the National Academy of Sciences, 2019;116(21):10226–10228. https://doi.org/10.1073/pnas.1902058116 .

Panayiotou M, Black L, Carmichael-Murphy P, Qualter P, Humphrey N. Time spent on social media among the least influential factors in adolescent mental health: preliminary results from a panel network analysis. Nat Mental Health. 2023;1(5):316–26. https://doi.org/10.1038/s44220-023-00063-7 .

Frison E, Eggermont S. Harder, Better, faster, stronger: negative comparison on Facebook and adolescents’ life satisfaction are reciprocally related. Cyberpsychology Behav Social Netw. 2016;19(3):158–64. https://doi.org/10.1089/cyber.2015.0296 .

Skogen JC, Hjetland GJ, Bøe T, Hella RT, Knudsen AK. Through the Looking Glass of Social Media. Focus on Self-Presentation and Association with Mental Health and Quality of Life. A Cross-Sectional Survey-Based Study. International Journal of Environmental Research and Public Health, 2021;18(6):3319. https://www.mdpi.com/1660-4601/18/6/3319 .

Bij de Vaate NAJD, Veldhuis J, Konijn EA. How online self-presentation affects well-being and body image: a systematic review. Telematics Inform. 2020;47:101316. https://doi.org/10.1016/j.tele.2019.101316 .

Mills JS, Musto S, Williams L, Tiggemann M. Selfie harm: effects on mood and body image in young women. Body Image. 2018;27:86–92. https://doi.org/10.1016/j.bodyim.2018.08.007 .

Baumeister RF, Hutton DG. Self-Presentation Theory: Self-Construction anf Audience Pleasing. In Theories of Group Behavior , Mullen, B. & Goethals, G. R., Eds.; Springer New York, NY, USA, 1987. pp. 71–87.

Herring S, Kapidzic S. Teens, gender, and Self-Presentation in Social Media. London, UK: Elsevier Health Sciences; 2015. pp. 146–52.

Google Scholar  

Lenhart A, Purcell K, Smith A, Zickuhr K. Social Media & Mobile Internet use among teens and young adults. Washington, D.C.: Pew Internet & American Life Project; 2010. pp. 17–25. https://eric.ed.gov/?id=ed525056 .

Negru-Subtirica O, Pop EI, Damian LE, Stoeber J. The very best of me: Longitudinal associations of perfectionism and identity processes in adolescence. Child Dev. 2021;92(5):1855–71. https://doi.org/10.1111/cdev.13622 .

Eranti V, Lonkila M. The social significance of the Facebook like button. First Monday. 2015;20(6). https://doi.org/10.5210/fm.v20i6.5505 .

Schlosser AE. Self-disclosure versus self-presentation on social media. Curr Opin Psychol. 2020;31:1–6. https://doi.org/10.1016/j.copsyc.2019.06.025 .

Hjetland GJ, Finserås TR, Sivertsen B, Colman I, Hella RT, Skogen JC. Focus on Self-Presentation on Social Media across Sociodemographic Variables, lifestyles, and personalities: a cross-sectional study. Int J Environ Res Public Health. 2022;19(17):11133. https://www.mdpi.com/1660-4601/19/17/11133 .

Zimmer-Gembeck MJ, Hawes T, Scott RA, Campbell T, Webb HJ. Adolescents’ online appearance preoccupation: a 5-year longitudinal study of the influence of peers, parents, beliefs, and disordered eating. Comput Hum Behav. 2023;140:107569. https://doi.org/10.1016/j.chb.2022.107569 .

Hjetland GJ, Schønning V, Hella RT, Veseth M, Skogen JC. How do Norwegian adolescents experience the role of social media in relation to mental health and well being: a qualitative study. BMC Psychol. 2021;9(1):78. https://doi.org/10.1186/s40359-021-00582-x .

Fardouly J, Pinkus RT, Vartanian LR. The impact of appearance comparisons made through social media, traditional media, and in person in women’s everyday lives. Body Image. 2017;20:31–9. https://doi.org/10.1016/j.bodyim.2016.11.002 .

Rodgers RF, Slater A, Gordon CS, McLean SA, Jarman HK, Paxton SJ. A Biopsychosocial Model of Social Media Use and body image concerns, disordered eating, and muscle-building behaviors among adolescent girls and boys. J Youth Adolesc. 2020;49(2):399–409. https://doi.org/10.1007/s10964-019-01190-0 .

Festinger L. A theory of social comparison processes. Hum Relat. 1954;7(2):117–40. https://doi.org/10.1177/001872675400700202 .

Chou HT, Edge N. They are happier and having better lives than I am: the impact of using Facebook on perceptions of others’ lives. Cyberpsychol Behav Soc Netw. 2012;15(2):117–21. https://doi.org/10.1089/cyber.2011.0324 .

Castellacci F, Tveito V. Internet use and well-being: a survey and a theoretical framework. Res Policy. 2018;47(1):308–25. https://doi.org/10.1016/j.respol.2017.11.007 .

Nesi J, Prinstein MJ. Using Social Media for Social Comparison and Feedback-Seeking: gender and Popularity Moderate associations with depressive symptoms. J Abnorm Child Psychol. 2015;43(8):1427–38. https://doi.org/10.1007/s10802-015-0020-0 .

Scully M, Swords L, Nixon E. Social comparisons on social media: online appearance-related activity and body dissatisfaction in adolescent girls. Ir J Psychol Med. 2023;40(1):31–42. https://doi.org/10.1017/ipm.2020.93 .

Article   PubMed   CAS   Google Scholar  

Hawes T, Zimmer-Gembeck MJ, Campbell SM. Unique associations of social media use and online appearance preoccupation with depression, anxiety, and appearance rejection sensitivity. Body Image. 2020;33:66–76. https://doi.org/10.1016/j.bodyim.2020.02.010 .

Hewitt PL, Norton GR, Flett GL, Callander L, Cowan T. Dimensions of perfectionism, hopelessness, and attempted suicide in a sample of alcoholics. Suicide Life-Threatening Behav. 1998;28(4):395–406. https://doi.org/10.1111/j.1943-278X.1998.tb00975.x .

Article   CAS   Google Scholar  

Curran T, Hill AP. Perfectionism is increasing over time: a meta-analysis of birth cohort differences from 1989 to 2016. Psychol Bull. 2019;145(4):410–29. https://doi.org/10.1037/bul0000138 .

Hewitt PL, Flett GL, Sherry SB, Habke M, Parkin M, Lam RW, McMurtry B, Ediger E, Fairlie P, Stein MB. The interpersonal expression of perfection: Perfectionistic self-presentation and psychological distress. 2003. https://doi.org/10.1037/0022-3514.84.6.1303 .

Mackinnon SP, Sherry SB. Perfectionistic self-presentation mediates the relationship between perfectionistic concerns and subjective well-being: a three-wave longitudinal study. Pers Indiv Differ. 2012;53(1):22–8. https://doi.org/10.1016/j.paid.2012.02.010 .

Limburg K, Watson HJ, Hagger MS, Egan SJ. The relationship between perfectionism and psychopathology: a Meta-analysis. J Clin Psychol. 2017;73(10):1301–26. https://doi.org/10.1002/jclp.22435 .

Bardone-Cone AM, Wonderlich SA, Frost RO, Bulik CM, Mitchell JE, Uppala S, Simonich H. Perfectionism and eating disorders: current status and future directions. Clin Psychol Rev. 2007;27(3):384–405. https://doi.org/10.1016/j.cpr.2006.12.005 .

Sherry SB, Hewitt PL, Besser A, McGee BJ, Flett GL. Self-oriented and socially prescribed perfectionism in the eating disorder inventory perfectionism subscale. Int J Eat Disord. 2004;35(1):69–79. https://doi.org/10.1002/eat.10237 .

American Psychiatric Association. Feeding ans eating Disorders. In Diagnostic and statistical manual of mental disorders (5th ed.); 2013. pp. 329–354. https://doi.org/10.1176/appi.books.9780890425596 .

World Health Organization. F50-59 Behavioral syndromes associated with psychological disturbances and physical factors. The ICD-10 classification of mental and behavioural disorders: Diagnostic criteria for research; 1993. pp. 136–142.

Galmiche M, Déchelotte P, Lambert G, Tavolacci MP. Prevalence of eating disorders over the 2000–2018 period: a systematic literature review. Am J Clin Nutr. 2019;109(5):1402–13. https://doi.org/10.1093/ajcn/nqy342 .

Gordon CS, Jarman HK, Rodgers RF, McLean SA, Slater A, Fuller-Tyszkiewicz M, Paxton SJ. Outcomes of a Cluster Randomized Controlled Trial of the SoMe Social Media Literacy Program for Improving Body Image-Related Outcomes in Adolescent Boys and Girls. Nutrients, 2021;13(11):3825. https://www.mdpi.com/2072-6643/13/11/3825 .

Holland G, Tiggemann M. A systematic review of the impact of the use of social networking sites on body image and disordered eating outcomes. Body Image. 2016;17:100–10. https://doi.org/10.1016/j.bodyim.2016.02.008 .

Meier EP, Gray J. Facebook Photo Activity Associated with body image disturbance in adolescent girls. Cyberpsychology Behav Social Netw. 2013;17(4):199–206. https://doi.org/10.1089/cyber.2013.0305 .

McLean SA, Paxton SJ, Wertheim EH, Masters J. Selfies and social media: relationships between self-image editing and photo-investment and body dissatisfaction and dietary restraint. J Eat Disorders. 2015;3(1):O21. https://doi.org/10.1186/2050-2974-3-S1-O21 .

Dane A, Bhatia K. The social media diet: a scoping review to investigate the association between social media, body image and eating disorders amongst young people. PLOS Glob Public Health. 2023;3(3):e0001091. https://doi.org/10.1371/journal.pgph.0001091 .

Thompson JK, Heinberg LJ, Altabe M, Tantleff-Dunn S. Exacting beauty: theory, assessment, and treatment of body image disturbance. Am Psychol Association. 1999. https://doi.org/10.1037/10312-000 .

Burke NL, Schaefer LM, Karvay YG, Bardone-Cone AM, Frederick DA, Schaumberg K, Klump KL, Anderson DA, Thompson JK. Does the tripartite influence model of body image and eating pathology function similarly across racial/ethnic groups of White, Black, Latina, and Asian women? Eat Behav. 2021;42:101519.

Shewale R. Social Media Users – Global Demographics. 2023. Demandsage.com. URL: https://www.demandsage.com/social-media-users/ .

Skogen JC, Andersen AIO, Finserås TR, Ranganath P, Brunborg GS, Hjetland GJ. Commonly reported negative experiences on social media are associated with poor mental health and well-being among adolescents: results from the LifeOnSoMe-study. Front Public Health. 2023;11. https://doi.org/10.3389/fpubh.2023.1192788 .

Garner DM. Eating Disorder Inventory-2 manual Odessa, FL: Psychological Assessment Resources. 1991. https://doi.org/10.1002/1098-108X(198321)2:2%3C15::AID-EAT2260020203%3E3.0.CO;2-6

Sand L, Bøe T, Shafran R, Stormark KM, Hysing M. Perfectionism in adolescence: associations with gender, Age, and socioeconomic status in a Norwegian sample. Front Public Health. 2021;9. https://doi.org/10.3389/fpubh.2021.688811 .

Lampard AM, Byrne SM, McLean N, Fursland A. The eating disorder Inventory-2 perfectionism scale: factor structure and associations with dietary restraint and weight and shape concern in eating disorders. Eat Behav. 2012;13(1):49–53. https://doi.org/10.1016/j.eatbeh.2011.09.007 .

Engelsen BK, Laberg JC. A comparison of three questionnaires (EAT-12, EDI, and EDE-Q) for assessment of eating problems in healthy female adolescents. Nord J Psychiatry. 2001;55(2):129–35. https://doi.org/10.1080/08039480151108589 .

Rosenvinge JH, Perry JA, Bjørgum L, Bergersen TD, Silvera DH, Holte A. A new instrument measuring disturbed eating patterns in community populations: development and initial validation of a five-item scale (EDS-5). Eur Eat Disorders Rev. 2001;9(2):123–32. https://doi.org/10.1002/erv.371 .

Heradstveit O, Holmelid E, Klundby H, Søreide B, Sivertsen B, Sand L. Associations between symptoms of eating disturbance and frequency of physical activity in a non-clinical, population-based sample of adolescents. J Eat Disorders. 2019;7(1):9. https://doi.org/10.1186/s40337-019-0239-1 .

Goodman R. Psychometric properties of the strengths and difficulties questionnaire. J Am Acad Child Adolesc Psychiatry. 2001;40(11):1337–45. https://doi.org/10.1097/00004583-200111000-00015 .

Krokstad S, Langhammer A, Hveem K, Holmen TL, Midthjell K, Stene TR, Bratberg G, Heggland J, Holmen J. Cohort Profile: the HUNT Study, Norway. Int J Epidemiol. 2013;42(4):968–77. https://doi.org/10.1093/ije/dys095 .

Sivertsen B, Råkil H, Munkvik E, Lønning KJ. Cohort profile: the SHoT-study, a national health and well-being survey of Norwegian university students. BMJ Open. 2019;9(1):e025200. https://doi.org/10.1136/bmjopen-2018-025200 .

Strahan EJ, Wilson AE, Cressman KE, Buote VM. Comparing to perfection: how cultural norms for appearance affect social comparisons and self-image. Body Image. 2006;3(3):211–27. https://doi.org/10.1016/j.bodyim.2006.07.004 .

Hjetland GJ, Finserås TR, Skogen JC. «Hele verden et tastetrykk unna – Ungdommers bruk og opplevelser med sosiale medier og online gaming» [Worldwide in a keystroke – Adolescents’ use and experiences with social media and online gaming]. Bergen: Folkehelseinstituttet. 2022. https://www.fhi.no/publ/2022/hele-verden-er-et-tastetrykk-unna---ungdommers-bruk-og-opplevelser-med-sosi/ .

Fredrickson BL, Roberts T-A. Objectification Theory: toward understanding women’s lived experiences and Mental Health risks. Psychol Women Q. 1997;21(2):173–206. https://doi.org/10.1111/j.1471-6402.1997.tb00108.x .

Neumark-Sztainer D, Paxton SJ, Hannan PJ, Haines J, Story M. Does body satisfaction matter? Five-year Longitudinal associations between body satisfaction and Health behaviors in adolescent females and males. J Adolesc Health. 2006;39(2):244–51. https://doi.org/10.1016/j.jadohealth.2005.12.001 .

Compte EJ, Sepulveda AR, Torrente F. A two-stage epidemiological study of eating disorders and muscle dysmorphia in male university students in Buenos Aires. Int J Eat Disord. 2015;48(8):1092–101. https://doi.org/10.1002/eat.22448 .

Eisenberg ME, Wall M, Neumark-Sztainer D. Muscle-enhancing behaviors among adolescent girls and boys. Pediatrics. 2012;130(6):1019–26. https://doi.org/10.1542/peds.2012-0095 .

McLean SA, Wertheim EH, Masters J, Paxton SJ. A pilot evaluation of a social media literacy intervention to reduce risk factors for eating disorders. Int J Eat Disord. 2017;50(7):847–51. https://doi.org/10.1002/eat.22708 .

Gordon CS, Rodgers RF, Slater AE, McLean SA, Jarman HK, Paxton SJ. A cluster randomized controlled trial of the SoMe social media literacy body image and wellbeing program for adolescent boys and girls: study protocol. Body Image. 2020;33:27–37. https://doi.org/10.1016/j.bodyim.2020.02.003 .

Halliwell E. The impact of thin idealized media images on body satisfaction: does body appreciation protect women from negative effects? Body Image. 2013;10(4):509–14. https://doi.org/10.1016/j.bodyim.2013.07.004 .

Keutler M, McHugh L. Self-compassion buffers the effects of perfectionistic self-presentation on social media on wellbeing. J Context Behav Sci. 2022;23:53–8. https://doi.org/10.1016/j.jcbs.2021.11.006 .

Diefenbach S, Christoforakos L. The Selfie Paradox: nobody seems to like them yet everyone has reasons to take them. An exploration of psychological functions of Selfies in Self-Presentation. Front Psychol. 2017;8. https://doi.org/10.3389/fpsyg.2017.00007 .

Rousseau A. Adolescents’ selfie-activities and idealized online self-presentation: an application of the sociocultural model. Body Image. 2021;36:16–26. https://doi.org/10.1016/j.bodyim.2020.10.005 .

Striegel-Moore RH, Bulik CM. Risk factors for eating disorders. Am Psychol. 2007;62(3):181–98. https://doi.org/10.1037/0003-066x.62.3.181 .

Knudsen AK, Hotopf M, Skogen JC, Øverland S, Mykletun A. The health status of nonparticipants in a population-based health study: the Hordaland Health Study. Am J Epidemiol. 2010;172(11):1306–14. https://doi.org/10.1093/aje/kwq257 .

Verbeij T, Pouwels JL, Beyens I, Valkenburg PM. The accuracy and validity of self-reported social media use measures among adolescents. Computers Hum Behav Rep. 2021;3:100090. https://doi.org/10.1016/j.chbr.2021.100090 .

Download references

Acknowledgements

We are grateful to Bergen municipality and Vestland County Council for their collaboration on this study. The present study is linked to a larger innovation project led by Bergen municipality in Western Norway concerning the use of social media and mental health and well-being. The innovation project is funded by a program initiated by the Norwegian Directorate of Health and aims to explore social media as a platform for health promotion among adolescents. Above all, we are very thankful to the pupils who participated in this study.

The work of GJH was supported by the Dam Foundation (grant number 2021/FO347287), while the work of JCS, AIOA, and TRF was supported by The Research Council of Norway (grant number 319845).

Open access funding provided by University of Bergen.

Author information

Authors and affiliations.

Department of Health Promotion, Norwegian Institute of Public Health, Bergen, Norway

Hilde Einarsdatter Danielsen, Turi Reiten Finserås, Amanda Iselin Olesen Andersen, Gunnhild Johnsen Hjetland & Jens Christoffer Skogen

Department of Clinical Psychology, University of Bergen, Bergen, Norway

Hilde Einarsdatter Danielsen & Vivian Woodfin

Centre for Evaluation of Public Health Measures, Norwegian Institute of Public Health, Oslo, Norway

Gunnhild Johnsen Hjetland & Jens Christoffer Skogen

Department of Clinical Psychology, Solli District Psychiatric Centre, Bergen, Norway

Vivian Woodfin

Center for Alcohol and Drug Research (KORFOR), Stavanger University Hospital, Stavanger, Norway

Jens Christoffer Skogen


Contributions

JCS analyzed the participant data from the LifeOnSoMe-study. All authors contributed to the interpretation of the results. HED and JCS wrote the first draft of the manuscript; TRF, AIOA, GJH, VW, JCS, and HED contributed additional revisions. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Hilde Einarsdatter Danielsen.

Ethics declarations

Ethics approval and consent to participate

Institutional Review Board Statement

The study was conducted in accordance with the guidelines of the Declaration of Helsinki and approved by the Regional Ethical Committee (REK) in Norway (REK#65611). All participants gave informed consent prior to participation and were informed about the general purpose of the study and the opportunity to withdraw from the study at any point. As all invited adolescents were 16 years or older, they were considered competent to consent on their own behalf, and additional consent from parents or guardians was not required.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Danielsen, H.E., Finserås, T.R., Andersen, A.I.O. et al. Mirror, mirror on my screen: Focus on self-presentation on social media is associated with perfectionism and disordered eating among adolescents. Results from the “LifeOnSoMe”-study. BMC Public Health 24 , 2466 (2024). https://doi.org/10.1186/s12889-024-19317-9


Received : 04 July 2023

Accepted : 01 July 2024

Published : 10 September 2024

DOI : https://doi.org/10.1186/s12889-024-19317-9


Keywords

  • Adolescents
  • Upward social comparison
  • Social media

BMC Public Health

ISSN: 1471-2458
