US20150074107A1 - Storing and serving images in memory boxes - Google Patents
- Publication number
- US20150074107A1
- Authority
- US
- United States
- Prior art keywords
- images
- image
- computer
- request
- implemented method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G06F17/30244
- G06V40/172—Human faces: classification, e.g. identification
- G06F16/23—Information retrieval of structured data, e.g. relational data: updating
- G06F16/50—Information retrieval of still image data
- G06F16/51—Still image data: indexing; data structures therefor; storage structures
- G06F16/583—Still image retrieval using metadata automatically derived from the content
- G06F16/5838—Still image retrieval using metadata automatically derived from the content, using colour
- G06F16/5854—Still image retrieval using metadata automatically derived from the content, using shape and object relationship
- G06F16/5866—Still image retrieval using manually generated metadata, e.g. tags, keywords, comments, location and time information
- G06F16/587—Still image retrieval using geographical or spatial information, e.g. location
- G06F17/30345
- G06F18/22—Pattern recognition: matching criteria, e.g. proximity measures
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/167—Face detection, localisation and normalisation using comparisons between temporally consecutive images
- G06V40/168—Feature extraction; face representation
- H04N5/77—Interface circuits between a recording apparatus and a television camera
- G06V40/179—Metadata assisted face recognition
Definitions
- This invention pertains in general to online storage and management of images and in particular to methods of displaying collections of stored images.
- Digital cameras have become a widespread tool for capturing photographs and videos. As the popularity of digital images has grown, it has become common for users to store sometimes massive collections of many thousands of images through online image hosting services such as FLICKR®, PHOTOBUCKET®, VIMEO®, or YOUTUBE®. Such services are limited in the interfaces they provide to users for organizing and viewing their images. Also, the convenience of users is impacted by the speed at which users can access and view their images.
- Embodiments of the invention provide methods of quickly and efficiently serving images and related data from an image server to a client device and conserving bandwidth consumed by updating the images and related data on the client device.
- a unique identifier for the image and parameters associated with the image are converted into a binary string.
- the relevant information can be conveyed in an efficient package and the overall size of the data conveyed is much smaller.
- a timestamp is set, and all requested images are served to the client with the respective binary strings. The served images and binary strings are then locally cached by the client.
- Updates to the stored images and associated parameters are recorded by the image server along with a respective time of each update. Then, when the image server receives the next request from the client device for images, all the updates since the previously set timestamp are served to the client device, and a new timestamp is set. In this way, only the changes to the images and associated parameters since the last request for images are communicated to the client device, rather than serving the entire set of images with their respective binary strings again.
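The timestamp-based serving scheme described above can be sketched as a small in-memory model. Python is used purely for illustration; the packed field layout of identifier, width, height, and flags is an assumed example of "parameters associated with the image," not the patent's actual format.

```python
import struct
import time

def pack_image_record(image_id: int, width: int, height: int, flags: int) -> bytes:
    """Pack an image's unique identifier and a few illustrative
    parameters into a compact binary string (network byte order)."""
    return struct.pack("!QHHB", image_id, width, height, flags)

class ImageServer:
    """Minimal in-memory sketch of timestamp-based delta serving."""

    def __init__(self):
        self._records = {}   # image_id -> packed binary string
        self._updated = {}   # image_id -> time of last update

    def store(self, image_id, width, height, flags=0):
        # Record the update time so later requests can receive only changes.
        self._records[image_id] = pack_image_record(image_id, width, height, flags)
        self._updated[image_id] = time.time()

    def serve(self, since=None):
        """Return (new timestamp, records updated after `since`).
        With since=None the full set is served, as on a first request."""
        now = time.time()
        changed = {
            i: rec for i, rec in self._records.items()
            if since is None or self._updated[i] > since
        }
        return now, changed
```

A first call to `serve()` returns every record plus a timestamp; passing that timestamp back on the next call returns only the records stored or modified since then.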
- Other embodiments of the invention provide methods of laying out images based on time in a plurality of rows for display within a user interface.
- the next image in order by time is found by reference to the timestamps associated with each image in the images to be laid out.
- the shortest row of the plurality of rows is determined based on an accumulated length of each row. For example, the accumulated length of each row is the sum of the widths of the images in the row plus the sum of the blank space between each of the images in the row.
- the image is placed in the shortest row.
- the method can be iterated until there are no more images to layout based on time, or at least until a sufficient number of images have been placed in the plurality of rows to fill a screen of the client device that is used to display the layout.
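The row-placement loop described above amounts to a greedy algorithm. The sketch below is a hypothetical illustration; the dict keys `timestamp` and `width` and the fixed gap value are assumptions, not details from the text.

```python
def layout_images(images, num_rows=3, gap=8):
    """Greedy time-ordered layout: each image (sorted by timestamp)
    is appended to the row with the smallest accumulated length,
    where a row's length is the sum of its image widths plus the
    blank space between them."""
    rows = [[] for _ in range(num_rows)]
    lengths = [0.0] * num_rows

    for img in sorted(images, key=lambda im: im["timestamp"]):
        # Find the currently shortest row by accumulated length.
        shortest = min(range(num_rows), key=lambda r: lengths[r])
        spacing = gap if rows[shortest] else 0  # no gap before the first image
        rows[shortest].append(img)
        lengths[shortest] += spacing + img["width"]
    return rows
```

In practice the loop would stop once enough images have been placed to fill the client's screen, per the iteration condition above.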
- Other embodiments of the invention include systems and non-transitory computer-readable storage media for serving images and related data and for laying out images for display according to the techniques described above.
- FIG. 1 is a network diagram of a system environment for storing images on an image server, in accordance with an embodiment of the invention.
- FIG. 2 is a block diagram of the image server depicted in FIG. 1 , in accordance with an embodiment of the invention.
- FIG. 3 is a block diagram illustrating an example data structure for an image, in accordance with an embodiment of the invention.
- FIG. 4 is a block diagram illustrating an example data structure for a memory box, in accordance with an embodiment of the invention.
- FIG. 5 is a flow chart illustrating a method of serving images, in accordance with an embodiment of the invention.
- FIG. 6 is a flow chart illustrating a method for laying out images based on time in a plurality of rows, in accordance with an embodiment of the invention.
- FIG. 7 is a prior art example of a layout of images in a grid pattern.
- FIG. 8 is an example of a non-grid layout of images in a plurality of rows, in accordance with an embodiment of the invention.
- FIG. 9A is another example of a non-grid layout of images in a plurality of rows, including some images organized into stacks, in accordance with an embodiment of the invention.
- FIG. 9B is another example of a non-grid layout of images in a plurality of rows, with a time line displayed, in accordance with an embodiment of the invention.
- FIG. 10 is a block diagram of the components of a computing system for use, for example, as the image server or the client devices depicted in FIG. 1 , in accordance with an embodiment of the invention.
- Embodiments of the invention provide methods of quickly and efficiently serving images and related data from an image server to a client device and conserving bandwidth consumed by updating the images and related data on the client device.
- Other embodiments of the invention provide methods of laying out images based on time in a plurality of rows for display within a user interface.
- These embodiments of the invention can operate in the context of a system environment 100 illustrated in FIG. 1 .
- the system environment 100 includes an image server 110 , a network 120 , and client devices 130 A, 130 B, and 130 C (collectively 130 ). Although only three client devices 130 are shown in FIG. 1 in order to clarify and simplify the description, a typical embodiment of the system environment 100 may include thousands or millions of client devices 130 connected to image server 110 over the network 120 .
- the image server 110 receives images from the client devices 130 and performs a wide variety of tasks related to storing and sharing the images. After a user contributes images to the image server 110 , the contributing user can interact with the image server 110 to share the images with other users, organize the images into memory boxes, identify and tag people in the images, and perform many other tasks. In addition, the image server 110 can analyze the metadata of contributed images to find related images and perform facial recognition to automatically identify and tag recognized people in images. A detailed description of the image server 110 is provided with reference to FIG. 2 .
- the network 120 relays communications between the image server 110 and the client devices 130 .
- the network 120 uses standard Internet communications technologies and/or protocols.
- the network 120 can include link technologies such as Ethernet, IEEE 802.11, IEEE 802.16 (WiMAX), 3GPP LTE, integrated services digital network (ISDN), asynchronous transfer mode (ATM), and the like.
- the networking protocols used on the network 120 can include the transmission control protocol/Internet protocol (TCP/IP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), and the like.
- the data exchanged over the network 120 can be represented using a variety of technologies and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), etc.
- all or some links can be encrypted using conventional encryption technologies such as the secure sockets layer (SSL), Secure HTTP (HTTPS) and/or virtual private networks (VPNs).
- the entities can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above.
- the client devices 130 are electronic devices that are capable of communicating with the image server 110 over the network 120 .
- a client device 130 may be a smartphone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a desktop computer.
- a client device 130 may optionally include an integrated camera so that the device can be used to upload an image to the image server 110 directly after capturing the image.
- the client devices 130 are further used to display collections of images served to client device 130 by the image server 110 .
- a single user may use multiple client devices 130 to interact with the image server 110 using the same user account.
- a user can use a first client device 130 A (e.g., a smartphone) to capture an image, and upload the image to the image server 110 using his or her user account.
- the user can then access and view the uploaded image from a second client device 130 B (e.g., a desktop computer) using the same user account.
- FIG. 2 is a block diagram of the image server 110 depicted in FIG. 1 , in accordance with an embodiment of the invention.
- the image server 110 includes an interface module 202 , a user account database 206 , a metadata analysis module 208 , an image management module 210 , an image store 212 , a memory box management module 214 , a memory box database 216 , a stack creation module 218 , and a time line layout module 220 .
- the interface module 202 handles communications between the image server 110 and the client devices 130 via the network 120 .
- the interface module 202 receives communications from users, such as uploaded images and requests to share images and memory boxes, and passes the communications to the appropriate modules.
- the interface module 202 also receives outgoing data, such as images and notifications, from the other modules and sends the data to the appropriate client devices 130 , for example to display a user's images in a user interface display on a client device 130 , such as the user interface depicted in FIGS. 9A and 9B .
- the user account database 206 stores user accounts that contain information associated with users of the image server 110 .
- each user account contains a unique identifier for the account, at least one piece of contact information for the corresponding user (e.g., an email address), and links to the user's accounts on external social networking services (e.g., FACEBOOK® or TWITTER®).
- a user account can also be a shared account that contains contact information or social networking links for two users. Shared accounts allow two people (e.g., a married couple, close friends, or two people with some other relationship) to access the image server 110 using the same user account while respecting that the two people are individuals with different identities. Since a user account is likely to contain sensitive data, the user account database 206 may be encrypted or subject to other data security techniques to protect the privacy of the users.
- the metadata analysis module 208 receives images and analyzes the metadata in the images to find related images in the image store 212 .
- the metadata analysis module 208 contains submodules that attempt to match individual types of metadata.
- the module 208 may contain a subject matching submodule for finding other images that include the same people, a timestamp analysis submodule for finding other images that were taken at approximately the same time, and a location analysis submodule for finding other images that were taken at approximately the same location.
- the module may also contain submodules for matching additional types of metadata. The metadata analysis module 208 can then aggregate the results from the submodules to generate a list of related images.
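One simple way to aggregate the submodule results is sketched below as an assumption (the text does not specify the aggregation strategy): count how many submodules matched each candidate image and rank the candidates by that vote.

```python
from collections import Counter

def aggregate_related(submodule_results):
    """Combine related-image lists from submodules such as subject,
    timestamp, and location matching: candidates matched by more
    submodules rank higher in the final related-images list."""
    counts = Counter(
        image_id
        for result in submodule_results
        for image_id in result
    )
    # most_common() sorts by descending vote count.
    return [image_id for image_id, _ in counts.most_common()]
```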
- the image management module 210 manages the images in the image store 212 .
- an image is a media item that contains visual content, such as a photograph or a video captured by a user, and at least one item of metadata that describes the visual content.
- a detailed description of an example image 300 is presented below with reference to FIG. 3 .
- the image management module 210 can modify an item of metadata in an image after receiving a corresponding request from a user (via the interface module 202 ) or from a different module of the image server 110 .
- the image management module 210 can also modify the visual content of an image. For example, the image management module 210 may scale images down, for example to a predefined maximum width.
- the image management module 210 also processes requests to add or remove images from the image store 212 .
- An image is typically added to the image store 212 after a user uploads the image to the image server 110 .
- the image management module 210 may also create a copy of an image that was previously added to the image store 212 .
- the image management module 210 can remove an image from the image store 212 after receiving a request from the user who uploaded the image.
- the memory box management module 214 creates and deletes memory boxes in the memory box database 216 .
- a memory box is an object that defines a collection of images in the image store 212 , and a detailed description of an example memory box is presented below with reference to FIG. 4 .
- the memory box management module 214 automatically creates a primary memory box, and all of the user's uploaded images are added to the primary memory box by default.
- the memory box management module 214 can also create additional memory boxes for a user account.
- a user may submit requests to create additional memory boxes as a way of categorizing and organizing their images on the image server 110 . For example, a user might wish to create separate memory boxes for images of each of her children. Users may also submit requests to delete memory boxes that they have created, and these requests are also processed by the memory box management module 214 .
- the memory box management module 214 also receives and processes requests to share memory boxes. After receiving a sharing request, either from the interface module 202 or a different module of the image server 110 , the memory box management module 214 accesses the requested memory box in the memory box database 216 and makes the requested change to the sharing settings of the requested memory box.
- the stack creation module 218 can create and maintain stacks of related images for display to a user within a user interface.
- the stacks visually organize the display of images to a user in a user interface so that the user's display is not overly cluttered with many related images. Instead, related images appear as a vertical stack with a cover image, together with a visual indication that distinguishes a stack from a single image: for example, the borders of several images are shown slightly offset behind the cover image to create the illusion of depth within the user interface. Examples of stacks are shown in FIG. 9A , such as stack 909 .
- the stacks may be manually initiated by a user by selecting images to be grouped into a stack, for example by dragging and dropping an image on top of one or more other images from within a user interface, or the stacks may be automatically or semi-automatically created by the stack creation module 218 through analysis of the metadata 304 or image data 302 .
- the images may be grouped into a stack by time, such as all images having a timestamp within a particular time window, wherein the time window may be user configurable to be short (e.g., on the order of a few seconds) or long (e.g., a day or a week).
- the images may be grouped into a stack by image similarity, such as all images having similar image content in the image data 302 (e.g., similar group of people, similar pose, similar background, etc.) as determined through analysis by a third-party image comparison tool (not shown), or as determined based on the images being from a similar location and/or a perceptual match of the images.
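Grouping by time window, as described above, can be sketched as follows. This is a minimal illustration under the assumption that each image carries a numeric `timestamp` (in seconds) and that gaps between consecutive images define stack boundaries; the `stack_id` field is an assumed name for the stack identifier recorded in the metadata.

```python
def group_into_stacks(images, window_seconds=5):
    """Group time-sorted images into stacks: an image joins the
    current stack if its timestamp is within `window_seconds` of the
    previous image, and otherwise starts a new stack. Each image's
    metadata is updated with its stack identifier."""
    stacks = []
    current = []
    for img in sorted(images, key=lambda im: im["timestamp"]):
        if current and img["timestamp"] - current[-1]["timestamp"] <= window_seconds:
            current.append(img)
        else:
            current = [img]
            stacks.append(current)
    # Record a unique stack identifier in each image's metadata.
    for stack_id, stack in enumerate(stacks):
        for img in stack:
            img["stack_id"] = stack_id
    return stacks
```

A user-configurable window (seconds to a week, per the text) would simply change `window_seconds`.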
- the stack creation module 218 may also determine which of a group of stacked images should be chosen as the cover image for the stack, for example, by determining which image of the group is the highest quality.
- the highest quality image may be determined to be the image showing the face of a person that appears frequently in the user's images at different times and locations (an “important person” to the user), and where the important person's eyes are open, is smiling, etc.
- the stack creation module 218 may also update the metadata 304 of each image to indicate that it is part of a stack, for example by assigning a unique stack identifier to each stack and recording, in the metadata 304 of each image in the stack, both the stack identifier and whether that image is the cover image of the respective stack.
- the time line layout module 220 may be part of the image server 110 , or in other embodiments, may reside in an application on the client devices 130 . Regardless of its physical location, the time line layout module 220 formats the display of images according to time. Specifically, the time line layout module 220 may practice the method illustrated in FIG. 6 to lay out images in a plurality of rows, which results in a layout such as shown in the example of FIG. 8 , both of which are discussed in greater detail below.
- FIG. 3 is a block diagram illustrating an example data structure for an image 300 such as an image from the image store 212 , in accordance with an embodiment of the invention.
- the image 300 contains image data 302 and metadata 304 .
- the image data 302 is the visual content of the image 300 . As described with reference to the image store 212 of FIG. 2 , the image data 302 may be a photograph or a video.
- the image data 302 may be compressed using any combination of lossless or lossy compression methods known in the art, such as run-length encoding, entropy encoding, chroma subsampling, or transform coding.
- the image data 302 may also include a stored perceptual value for images, such as a perceptual hash, for use in finding pixel-similar images. The stored perceptual data is used to find pixel-based similarities to determine if two images are duplicates or near duplicates.
- the metadata 304 includes a contributor account identifier 306 , sharing settings 308 , location data 310 , a timestamp 312 , an activity 314 , and tags of recognized people 316 .
- the metadata 304 may include additional or different information that is not explicitly shown in FIG. 3 , such as identifying information for the camera that was used to capture the image data 302 , the optical settings that were used to capture the image data 302 (e.g., shutter speed, focal length, f-number), the resolution of the image data 302 , or a caption for the image 300 .
- the contributor account identifier 306 identifies the user account that was used to upload the image 300 to the image server 110 .
- the contributor account identifier 306 is the unique account identifier described with reference to the user account store 206 of FIG. 2 .
- the contributor account identifier 306 may be an item of contact information corresponding to the contributor account or some other piece of identifying information.
- the sharing settings 308 are a list of identifiers for additional user accounts and the sharing privileges that have been given to each additional user account. Sharing privileges specify the level of access that the contributing user has granted to the additional user accounts. For example, a first user account may only be allowed to view the image, whereas a second user account may be allowed to view the image and add tags for additional recognized people 316 .
- the sharing settings 308 may be used to specify a different set of sharing privileges for each additional user account, and each set of sharing privileges specifies which items of metadata 304 the user account is allowed to change. Defining sharing settings 308 in the metadata 304 of an image 300 beneficially allows individual images 300 to be shared between users.
- the sharing settings 308 are omitted, and the sharing privileges granted to users are instead stored in association with the corresponding user accounts.
- each user account would include a list of identifiers for images 300 that have been shared with the user account, and the sharing privileges that have been granted to the user account for each image 300 would be stored in the user account in association with the corresponding image identifier.
- a user may save a caption or some other sort of user-specific annotation in association with an image identifier in his user profile. Saving captions and other user-specific annotations in the corresponding user profiles beneficially allows multiple users to assign different annotations to the same shared image 300 .
- the location data 310 is information that identifies where the image 300 was taken.
- the location data 310 may include, for example, coordinates from a global navigation satellite system (GNSS) which are retrieved and recorded by the camera at the time the image 300 is taken. Alternatively, a user may manually add GNSS coordinates to an image at some point after the image 300 is taken.
- the location data 310 may also contain a textual location descriptor that provides a user-readable label for where the image 300 was taken. For example, the location descriptor may be “Home,” “Soccer Field,” or “San Francisco.” A user may manually add a location descriptor.
- the location of an image can be determined in some circumstances based on the IP address of the device used to upload the image at the time of capture of the image. For example, if a user uploads an image from a smart phone at the time of the capture, but the uploaded image does not contain geo-data, the IP address of the user's device at the time the user uploaded the image can be used to estimate the location of the image.
- the timestamp 312 is the date and time at which the image data 302 was captured.
- the timestamp 312 may be retrieved from an internal clock of the camera and recorded at the time the image 300 is taken, or it may be manually added or modified by the user after the image 300 is taken.
- the timestamp 312 can be used for ordering images by the time that they were captured on a time line for display to a user, as will be described in greater detail with reference to FIG. 6 .
- timestamps can also be recorded when any changes to the image data 302 or the metadata 304 are stored. These timestamps can be useful in synchronizing data that has been communicated to a client device, as will be described in greater detail with reference to FIG. 5 .
- the activity 314 identifies an event at which the image data 302 was captured (e.g., “soccer game,” “summer vacation,” “birthday party,” “high school graduation,” etc.) or an action in which the people in the image are engaged (e.g., “playing soccer,” “swimming,” “eating cake,” “graduating from high school,” etc.).
- a user may manually define the activity 314 based on pre-existing knowledge of the context in which the image 300 was taken.
- the metadata analysis module 208 may instead suggest or assign the activity 314 recorded for other images having similar timestamps to be the activity for the image 300 .
- for example, if other images taken at approximately the same time record the activity 314 “playing soccer,” the metadata analysis module 208 may suggest or assign “playing soccer” as the activity 314 of the image 300 .
- the tags for recognized people 316 identify people who are shown in the image data.
- the tags 316 may be manually added by the user, automatically added by the facial recognition module (not shown), or added based on a combination of automatic facial recognition and user input.
- the tags 316 are links to a corresponding facial recognition model in a facial recognition model storage.
- each tag 316 may simply specify a name for the recognized person.
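The metadata fields described above can be collected into a simple container. This is an illustrative Python sketch of the FIG. 3 structure, not the patent's actual storage format; the field types are assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ImageMetadata:
    """Illustrative container mirroring the metadata 304 fields."""
    contributor_account_id: str                 # 306: uploading account
    sharing_settings: dict = field(default_factory=dict)  # 308: account -> privileges
    location: Optional[tuple] = None            # 310: e.g. GNSS (lat, lon)
    timestamp: Optional[float] = None           # 312: capture time
    activity: Optional[str] = None              # 314: e.g. "soccer game"
    recognized_people: list = field(default_factory=list)  # 316: tags

@dataclass
class Image:
    """An image 300: visual content plus its descriptive metadata."""
    image_data: bytes
    metadata: ImageMetadata
```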
- the metadata 304 is used in a multi-step process for determining if two images are duplicates, even if they are in a different format, size, or encoding.
- first, the metadata 304 of the images is considered, including file name, date, size, and source.
- second, an MD5 hash of each image is created and compared to detect exact byte-for-byte duplicates.
- third, a perceptual match of the images is performed to determine if they are in fact the same image as perceived by the user, but happen to be reformatted, resized, or in a different encoding.
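The multi-step duplicate check described above might be sketched as follows. This is a minimal illustration, not the patent's actual implementation: the helper names are invented, and a simple average-hash over a downscaled grayscale grid stands in for whatever perceptual-matching technique the image server actually uses.

```python
import hashlib

def md5_digest(image_bytes):
    # Step two: exact byte-level comparison via an MD5 hash.
    return hashlib.md5(image_bytes).hexdigest()

def perceptual_fingerprint(pixels):
    # Step three stand-in: pixels is a small grayscale grid (e.g., the image
    # downscaled to 8x8); each bit records whether a pixel exceeds the mean,
    # so reformatted, resized, or re-encoded copies hash alike.
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return tuple(p > mean for p in flat)

def are_duplicates(meta_a, meta_b, bytes_a, bytes_b, pixels_a, pixels_b):
    # Step one: compare metadata (file name, date, size, source).
    if meta_a == meta_b:
        return True
    # Step two: identical bytes mean identical images.
    if md5_digest(bytes_a) == md5_digest(bytes_b):
        return True
    # Step three: perceptual match catches the same image in a different
    # format, size, or encoding.
    return perceptual_fingerprint(pixels_a) == perceptual_fingerprint(pixels_b)
```

For a pair of videos, the same structure would apply, with the MD5 step replaced by perceptual fingerprints of static frames sampled from the same points in each video.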
- a similar process can be performed for a pair of videos with different encodings, formats, and resolutions; however, rather than the MD5 hash match, a perceptual match is performed instead.
- a sampling of static frames from the same points in each of the videos is extracted and compared using perceptual pixel matching.
- the recognition of duplicates or near duplicates is useful so that rotations and tagging applied to one of a group of highly similar images can be automatically applied to the others.
- the metadata analysis module 208 may suggest to the user or automatically apply the same adjustments to the other images.
- FIG. 4 is a block diagram illustrating an example data structure for implementing a memory box 400 from the memory box database 216 , in accordance with an embodiment of the invention.
- the example data structure includes links to images 402 , a contributor account identifier 404 , and sharing settings 406 .
- the links to images 402 identify images 300 in the image store 212 that are in the memory box 400 .
- an image 300 is ‘added’ to the memory box 400 when a link to the image 300 is added to the image links 402 , and an image 300 is ‘removed’ from the memory box 400 when the corresponding link is removed.
- the memory box 400 may hold a copy of the images instead.
- using links 402 is beneficial because the same image can be used in multiple memory boxes without being copied multiple times. This reduces the amount of storage space that is used on the image server 110 and maintains a single set of metadata 304 for the image.
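The memory box data structure described above might be sketched as follows. The class and method names are illustrative assumptions; the point is that a memory box stores only links into the image store, so adding and removing images never copies or deletes image data.

```python
from dataclasses import dataclass, field

@dataclass
class MemoryBox:
    contributor_account_id: str
    image_links: list = field(default_factory=list)       # links into the image store, not copies
    sharing_settings: dict = field(default_factory=dict)  # account id -> granted privileges

    def add_image(self, image_id):
        # 'Adding' an image only records a link, so the same stored image
        # can appear in many memory boxes without being stored twice.
        if image_id not in self.image_links:
            self.image_links.append(image_id)

    def remove_image(self, image_id):
        # 'Removing' an image deletes the link; the stored image and its
        # single set of metadata are untouched.
        if image_id in self.image_links:
            self.image_links.remove(image_id)
```

Because every memory box that contains a given image points at the same stored record, a metadata change (such as a new tag) is visible in all of them at once.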
- the contributor account identifier 404 identifies the user account for which the memory box 400 was created. Similar to the contributor account identifier 306 described with reference to the image 300 of FIG. 3 , the contributor account identifier 404 in a memory box 400 may be the unique account identifier described with reference to the user account store 206 of FIG. 2 . Alternatively, the contributor account identifier 404 may be an item of contact information corresponding to the contributor account or some other piece of identifying information. The user account identified in the contributor account identifier 404 has full access privileges to the memory box 400 and is able to add and remove image links 402 and modify the sharing settings 406 of the memory box 400 .
- the sharing settings 406 for the memory box 400 are a list of identifiers for additional user accounts and a set of sharing privileges that have been given to each additional user account.
- a set of sharing privileges defines the specific read/write privileges of an additional user.
- a set of sharing privileges may specify whether a user is allowed to add images, remove images, or modify images in the memory box 400 (e.g., by adding tags for recognized people or modifying other metadata).
- a set of sharing privileges may also specify whether a user is allowed to view every image in the memory box or only a subset of the images in the memory box 400 . If the user is only allowed to view a subset of the images, then the sharing privileges may define the subset. Alternatively, the subset of images may be predefined (e.g., images that were taken at a particular location, on a particular date, etc.).
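A privilege check over these sharing settings might look like the following sketch. The privilege names ("view", "add_image", "remove_image") are hypothetical, since the patent leaves the exact set of privileges open; only the rule that the contributor always has full access comes from the text above.

```python
def can_perform(contributor_id, sharing_settings, account_id, action):
    # The contributor account always has full access privileges.
    if account_id == contributor_id:
        return True
    # Other accounts may perform only the actions granted to them in the
    # sharing settings; unknown accounts get no privileges at all.
    granted = sharing_settings.get(account_id, set())
    return action in granted
```

Here `sharing_settings` is a mapping from account identifier to the set of privileges granted to that account, mirroring the per-account privilege lists described above.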
- each user account in the user account database 206 would include a list of identifiers for memory boxes 400 that have been shared with the user account and the corresponding sharing privileges that have been granted.
- FIG. 5 is a flow chart illustrating a method of serving images for display on a client device 130 , in accordance with an embodiment of the invention.
- the method can be advantageously used to speed the delivery of images and related data over a network 120 for display on a client device 130 , and to conserve the amount of bandwidth consumed by updating the images and related data on the client device 130 .
- a unique identifier for the image and parameters associated with the image are converted into a binary string.
- parameters may include parameters from the metadata 304 , and/or a variety of other parameters. Specific examples of parameters include: whether the image is from a photo or video, whether the image is enhanced or not, what the source of the image is, whether the image is rotated or not, whether the image is stacked or not, whether it is the cover image of a stack or not, the timestamp from when the image was created, the timestamp from when the image was imported to the image server 110 , etc.
- the relevant information can be conveyed in an efficient package and the overall size of the data conveyed is much smaller.
- the binary string is represented in hexadecimal notation to make the string more compact.
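The conversion of an image's unique identifier and parameters into a compact hexadecimal string might be sketched as follows. The particular fields, their order, and their bit widths are assumptions for illustration; the patent lists candidate parameters without fixing an encoding.

```python
def encode_image_params(image_id, is_video, enhanced, rotation_quarter_turns,
                        created_ts, imported_ts):
    # Pack the unique identifier and parameters into a single integer,
    # field by field: 1 bit for photo/video, 1 bit for enhanced,
    # 2 bits for rotation, and 32-bit Unix timestamps for creation
    # and import times (field layout is an illustrative assumption).
    value = image_id
    value = (value << 1) | (1 if is_video else 0)
    value = (value << 1) | (1 if enhanced else 0)
    value = (value << 2) | (rotation_quarter_turns & 0b11)
    value = (value << 32) | (created_ts & 0xFFFFFFFF)
    value = (value << 32) | (imported_ts & 0xFFFFFFFF)
    # Hexadecimal notation keeps the transmitted string compact.
    return format(value, "x")
```

Because every parameter occupies a fixed number of bits, the whole record travels as one short hex string rather than a verbose structured payload, which is what makes the per-image overhead negligible next to the image data itself.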
- a timestamp is set. This timestamp marks the time at which the client request was received.
- the client request may specify only a subset of the images in a memory box 400 , accessed through the user's account at the image server 110 , that the user wants to view.
- the client request may result from a query for images that satisfy a certain metadata criterion or combination of metadata criteria, such as the people in the images (recognized people 316 ), the location in which the images were taken (location data 310 ), and/or the activity shown in the images (activity 314 ).
- the client request may specify only images from within a particular timeframe that a user wants to view on a time line, or the starting timeframe may be set by default (e.g., a default of images from this month, images from this year, etc.).
- in step 503 , all requested images are served to the client with the respective binary strings.
- the compactness of the binary string reduces the amount of data that is transferred over the network 120 when the requested images are served from the image server 110 to the client device 130 .
- the amount of data in the binary string is dwarfed by the size of the image data 302 for the requested images.
- the requested images and the respective binary strings received from the image server 110 are stored in local cache on the client device 130 .
- in step 504 , updates to the stored images and the associated parameters are recorded by the image server 110 , including the respective time of the update.
- the updates can include new images uploaded to the image server 110 , or changes to metadata of the images.
- in step 505 , responsive to the next received request from the client device 130 for the display of images, all the updates since the previously set timestamp are served to the client device 130 , and a new timestamp is set.
- steps 504 and 505 can be iterated as additional updates are recorded by the image server 110 , and subsequent requests for the display of images are received. Each time, only the updates since the previous timestamp need to be served in order to ensure that the client device 130 has the most current version of the images and associated parameters available from the user's account.
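The timestamped update protocol of steps 502-505 might be sketched as follows. The class and method names are illustrative assumptions, and a monotonically increasing counter stands in for the server clock so the example is deterministic.

```python
import itertools

class UpdateLog:
    """Sketch of the incremental-update protocol: serve everything once,
    then serve only the updates recorded since the last request."""

    def __init__(self):
        self._clock = itertools.count()  # stand-in for the server clock
        self._updates = []               # (time, description) records (step 504)
        self._last_served = {}           # client id -> timestamp set at last request

    def record_update(self, description):
        # Step 504: record each update together with its time.
        self._updates.append((next(self._clock), description))

    def serve(self, client_id):
        # Steps 502/505: set a new timestamp, then serve only the updates
        # recorded since the previously set timestamp for this client.
        now = next(self._clock)
        since = self._last_served.get(client_id, -1)
        self._last_served[client_id] = now
        return [d for (t, d) in self._updates if since < t < now]
```

A first request returns everything; each later request returns only the delta, which is what conserves bandwidth across repeated synchronizations.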
- FIG. 6 is a flow chart illustrating a method for laying out images based on time in a plurality of rows for display within a user interface, in accordance with an embodiment of the invention.
- the method illustrated in FIG. 6 is performed by a time line layout module 220 of an image server 110 .
- the method illustrated in FIG. 6 may be performed by a similar module residing on a client device 130 . It is assumed that the module performing the layout has access to the images and a respective timestamp for each image that indicates when the image was created.
- the time line layout module 220 finds the next image in order by time by referencing the timestamps of the images. For convenience, all images to be laid out may be first ordered by their timestamps so that they are queued for quick placement. Otherwise, the next image in order by time may be identified from the remaining images to be laid out each time step 601 is iterated.
- the next image may be the closest image in time to a user-provided or default first end of the time line. The first end of the time line can be either the most recent end or the oldest end of the time line. In other words, the time line can be built from a point in time forward in time or from a point in time backward in time, depending on the user's preferences.
- the time line can be built horizontally or vertically, depending on the user's preferences.
- this description will refer to a horizontally oriented timeline.
- for simplicity, this description will use the term “row” to refer both to horizontally oriented rows in a horizontally oriented time line and to vertically oriented columns in a vertically oriented time line.
- the shortest row of a plurality of rows is determined based on an accumulated length of each row.
- initially, all rows may be zero in length, but as images are added to the rows, they will grow in length.
- each row will grow in lateral extent by the width of an added image plus the width of a border or blank space between the previous image in the row and the added image.
- each row will grow in vertical extent by the height of an added image plus the height of a border or blank space between the previous image in the row and the added image.
- the accumulated length of each row is the sum of the widths of the images in the row plus the sum of the blank space between each of the images in the row.
- if two or more rows are tied for the shortest, the tie may be broken in favor of the row closest to the top of the user interface for horizontally oriented timelines or closest to the left of the user interface for vertically oriented timelines.
- in step 603 , the image is placed in the shortest row.
- the placement of the image in the row may be spaced so as to maintain a border or blank space between the previous image in the row and the newly placed image, and the borders or blank spaces between each image in the various rows may be approximately the same to provide a cohesive and visually pleasing layout.
- steps 601 - 603 can be iterated until there are no more images to lay out based on time, or at least until a sufficient number of images have been placed in the plurality of rows to fill a screen of the client device 130 that is used to display the layout. Because the images vary in dimensions, the resulting visual display of the images in the user interface based on the layout method of FIG. 6 does not resemble grids that are commonly used in the prior art.
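Steps 601-603 might be sketched as follows. The function signature is an illustrative assumption: each image is reduced to a (timestamp, width) pair, a fixed gap models the blank space between images, and Python's min() naturally breaks ties in favor of the lower-indexed (topmost) row.

```python
def layout_timeline(images, num_rows=3, gap=10):
    # images: list of (timestamp, width) pairs to lay out in time order.
    # Step 601: order the images by time (here, newest first).
    ordered = sorted(images, key=lambda img: img[0], reverse=True)
    rows = [[] for _ in range(num_rows)]
    lengths = [0] * num_rows
    for timestamp, width in ordered:
        # Step 602: find the shortest row by accumulated length; min()
        # breaks ties in favor of the row closest to the top.
        shortest = min(range(num_rows), key=lengths.__getitem__)
        # Step 603: place the image in the shortest row; every image after
        # the first in a row contributes a blank space to the row's length.
        border = gap if rows[shortest] else 0
        rows[shortest].append(timestamp)
        lengths[shortest] += border + width
    return rows, lengths
```

Because placement depends on accumulated widths rather than a fixed column count, rows fill unevenly and the result is the non-grid layout of FIG. 8 rather than the uniform grid of FIG. 7.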
- An example of a prior art grid layout of 15 images is shown in FIG. 7 . In this example, image 15 is the newest and image 1 is the oldest.
- FIG. 8 is an example of a non-grid layout of images in a plurality of rows, in accordance with an embodiment of the invention. This example applies the method of FIG. 6 to lay out the same 15 images from FIG. 7 into three horizontally oriented rows 801 , 802 , 803 , wherein the border or blank space 808 between each image in each of the rows is approximately the same.
- after the method places images 15-8 into the rows 801 , 802 , 803 (moving backward in time while reading from right to left), when it is time to place image 7 into a row, it is noted that the accumulated length of row 801 (comprising the width of image 15, the width of image 12, the width of image 9, and two blank spaces between the images) is shorter than the accumulated length of row 802 (comprising the width of image 14, the width of image 11, the width of image 8, and two blank spaces between the images) and shorter than the accumulated length of row 803 (comprising the width of image 13, the width of image 10, and one blank space between the images).
- image 7 is placed into row 801 .
- image 6 is placed into row 803 because 803 became the new shortest row with the placement of image 7 in row 801 , and so on until all of the images to be displayed are laid out.
- the method only lays out images that the user can currently view on the screen of the client device 130 , and waits for the user to scroll in either direction to lay out further images in that direction.
- the method lays out images that the user can currently view on the screen of the client device 130 plus an off-screen margin on either side of the current view, so that small changes of the view in either direction (e.g., by the user scrolling slightly to the left or right) can be accommodated without need to perform further layout of images.
- FIG. 9A is another example of a non-grid layout of images in a plurality of rows 901 - 903 , including some images organized into stacks (e.g., stack 909 ), in accordance with an embodiment of the invention.
- the example of FIG. 9A includes both photographs (e.g., photograph 991 ) and videos (e.g., video 992 ) organized by time.
- FIG. 9B is another example of a non-grid layout of images in a plurality of rows 904 - 905 , with a time line displayed 910 between the rows 904 and 905 .
- the markers on the time line 910 indicate the date that the associated images were captured.
- all images captured on the same day are grouped together, for example by including one image from the day in a larger size (e.g., large image 911 taken Feb. 19, 2012) and the remaining images from the same day as a series of images in smaller size (e.g., small images 912 , 913 ) in a grid pattern next to the larger image, in accordance with an embodiment of the invention.
- the non-grid layout of the larger images in the two rows 904 and 905 above and below the time line 910 may follow the technique described above with reference to FIG. 6 .
- FIG. 10 is a block diagram of the components of a computing system 1000 for use, for example, as the image server 110 or client devices 130 depicted in FIG. 1 , in accordance with an embodiment of the invention. Illustrated are at least one processor 1002 coupled to a chipset 1004 . Also coupled to the chipset 1004 are a memory 1006 , a storage device 1008 , a keyboard 1010 , a graphics adapter 1012 , a pointing device 1014 , a network adapter 1016 , and a camera 1024 . A display 1018 is coupled to the graphics adapter 1012 . In one embodiment, the functionality of the chipset 1004 is provided by a memory controller hub 1020 and an I/O controller hub 1022 . In another embodiment, the memory 1006 is coupled directly to the processor 1002 instead of the chipset 1004 .
- the storage device 1008 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device.
- the memory 1006 holds instructions and data used by the processor 1002 .
- the pointing device 1014 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 1010 to input data into the computer 1000 .
- the graphics adapter 1012 displays images and other information on the display 1018 .
- the network adapter 1016 couples the computer 1000 to a network.
- the camera 1024 captures digital photographs and videos.
- the camera 1024 includes an image sensor and an optical system (e.g., one or more lenses and a diaphragm), in addition to other components.
- the camera 1024 may also include a microphone for capturing audio data, either as standalone audio or in conjunction with video data captured by the image sensor.
- the camera 1024 is a separate computing device with its own processor, storage medium, and memory (e.g., a point-and-shoot or DSLR camera), and the camera 1024 is coupled to the I/O controller hub through an external connection (e.g., USB).
- the camera 1024 may be a component of the computing system 1000 (e.g., an integrated camera in a smart phone, tablet computer, or PDA).
- a computer 1000 can have different and/or other components than those shown in FIG. 10 .
- the computer 1000 can lack certain illustrated components.
- a computer 1000 acting as a server may lack a keyboard 1010 , pointing device 1014 , graphics adapter 1012 , display 1018 , and/or camera 1024 .
- the storage device 1008 can be local and/or remote from the computer 1000 (such as embodied within a storage area network (SAN)).
- the computer 1000 is adapted to execute computer program modules for providing functionality described herein.
- module refers to computer program logic utilized to provide the specified functionality.
- a module can be implemented in hardware, firmware, and/or software.
- program modules are stored on the storage device 1008 , loaded into the memory 1006 , and executed by the processor 1002 .
- Embodiments of the physical components described herein can include other and/or different modules than the ones described here.
- the functionality attributed to the modules can be performed by other or different modules in other embodiments.
- this description occasionally omits the term “module” for purposes of clarity and convenience.
- the present invention also relates to an apparatus for performing the operations herein.
- This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer.
- a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of computer-readable storage medium suitable for storing electronic instructions, and each coupled to a computer system bus.
- the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
- any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
- the appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion.
- a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus.
- “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
Abstract
Images and related data are served from an image server to a client device. A unique identifier and associated parameters for each image are converted into a binary string. Responsive to a request for images, a timestamp is set, and images are served with the respective binary strings. Updates are recorded by the image server with a respective time of each update. When the image server receives the next request for images, all the updates since the previously set timestamp are served, and a new timestamp is set. Other embodiments provide methods of laying out images based on time in a plurality of rows for display within a user interface. The next image in order by time is found by reference to the timestamps associated with each image. Then, the shortest row of the plurality of rows is determined, and the image is placed in the shortest row.
Description
- 1. Technical Field
- This invention pertains in general to online storage and management of images and in particular to methods of displaying collections of stored images.
- 2. Description of Related Art
- Digital cameras have become a widespread tool for capturing photographs and videos. As the popularity of digital images has grown, it has become common for users to store sometimes massive collections of many thousands of images through online image hosting services such as FLICKR®, PHOTOBUCKET®, VIMEO®, or YOUTUBE®. Such services are limited in the interfaces they provide to users for organizing and viewing their images. Also, the convenience of users is impacted by the speed at which users can access and view their images.
- Embodiments of the invention provide methods of quickly and efficiently serving images and related data from an image server to a client device and conserving bandwidth consumed by updating the images and related data on the client device. For each stored image on the image server, a unique identifier for the image and parameters associated with the image are converted into a binary string. By converting the unique identifier for the image and the parameters associated with the image into a binary string, the relevant information can be conveyed in an efficient package and the overall size of the data conveyed is much smaller. Responsive to a received request from a client device for images, a timestamp is set, and all requested images are served to the client with the respective binary strings. The served images and binary strings are then locally cached by the client. Updates to the stored images and associated parameters are recorded by the image server along with a respective time of each update. Then, when the image server receives the next request from the client device for images, all the updates since the previously set timestamp are served to the client device, and a new timestamp is set. In this way, only the changes to the images and associated parameters since the last request for images are communicated to the client device, rather than serving the entire set of images with their respective binary strings again.
- Other embodiments of the invention provide methods of laying out images based on time in a plurality of rows for display within a user interface. First, the next image in order by time is found by reference to the timestamps associated with each image in the images to be laid out. Next, the shortest row of the plurality of rows is determined based on an accumulated length of each row. For example, the accumulated length of each row is the sum of the widths of the images in the row plus the sum of the blank space between each of the images in the row. Then, the image is placed in the shortest row. The method can be iterated until there are no more images to lay out based on time, or at least until a sufficient number of images have been placed in the plurality of rows to fill a screen of the client device that is used to display the layout.
- Other embodiments of the invention include systems and a non-transitory computer-readable storage media for serving images and related data and for laying out images for display according to the techniques described above.
- The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings, specification, and claims. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
- FIG. 1 is a network diagram of a system environment for storing images on an image server, in accordance with an embodiment of the invention.
- FIG. 2 is a block diagram of the image server depicted in FIG. 1 , in accordance with an embodiment of the invention.
- FIG. 3 is a block diagram illustrating an example data structure for an image, in accordance with an embodiment of the invention.
- FIG. 4 is a block diagram illustrating an example data structure for a memory box, in accordance with an embodiment of the invention.
- FIG. 5 is a flow chart illustrating a method of serving images, in accordance with an embodiment of the invention.
- FIG. 6 is a flow chart illustrating a method for laying out images based on time in a plurality of rows, in accordance with an embodiment of the invention.
- FIG. 7 is a prior art example of a layout of images in a grid pattern.
- FIG. 8 is an example of a non-grid layout of images in a plurality of rows, in accordance with an embodiment of the invention.
- FIG. 9A is another example of a non-grid layout of images in a plurality of rows, including some images organized into stacks, in accordance with an embodiment of the invention.
- FIG. 9B is another example of a non-grid layout of images in a plurality of rows, with a time line displayed, in accordance with an embodiment of the invention.
- FIG. 10 is a block diagram of the components of a computing system for use, for example, as the image server or the client devices depicted in FIG. 1 , in accordance with an embodiment of the invention.
- The figures depict embodiments of the present invention for purposes of illustration only. One skilled in the art will readily recognize from the following description that alternative embodiments of the structures and methods illustrated herein may be employed without departing from the principles of the invention described herein.
- Embodiments of the invention provide methods of quickly and efficiently serving images and related data from an image server to a client device and conserving bandwidth consumed by updating the images and related data on the client device. Other embodiments of the invention provide methods of laying out images based on time in a plurality of rows for display within a user interface. These embodiments of the invention can operate in the context of a
system environment 100 illustrated in FIG. 1 . The system environment 100 includes an image server 110 , a network 120 , and client devices 130A and 130B. While only two client devices are shown in FIG. 1 in order to clarify and simplify the description, a typical embodiment of the system environment 100 may include thousands or millions of client devices 130 connected to the image server 110 over the network 120 . - The
image server 110 receives images from the client devices 130 and performs a wide variety of tasks related to storing and sharing the images. After a user contributes images to the image server 110 , the contributing user can interact with the image server 110 to share the images with other users, organize the images into memory boxes, identify and tag people in the images, and perform many other tasks. In addition, the image server 110 can analyze the metadata of contributed images to find related images and perform facial recognition to automatically identify and tag recognized people in images. A detailed description of the image server 110 is provided with reference to FIG. 2 . - The
network 120 relays communications between the image server 110 and the client devices 130. In one embodiment, the network 120 uses standard Internet communications technologies and/or protocols. Thus, the network 120 can include link technologies such as Ethernet, IEEE 802.11, IEEE 802.16, WiMAX, 3GPP LTE, integrated services digital network (ISDN), asynchronous transfer mode (ATM), and the like. Similarly, the networking protocols used on the network 120 can include the transmission control protocol/Internet protocol (TCP/IP), the hypertext transport protocol (HTTP), the simple mail transfer protocol (SMTP), the file transfer protocol (FTP), and the like. The data exchanged over the network 120 can be represented using a variety of technologies and/or formats including the hypertext markup language (HTML), the extensible markup language (XML), etc. In addition, all or some links can be encrypted using conventional encryption technologies such as the secure sockets layer (SSL), Secure HTTP (HTTPS) and/or virtual private networks (VPNs). In another embodiment, the entities can use custom and/or dedicated data communications technologies instead of, or in addition to, the ones described above. - The client devices 130 are electronic devices that are capable of communicating with the
image server 110 over the network 120 . For example, a client device 130 may be a smartphone, a personal digital assistant (PDA), a tablet computer, a laptop computer, or a desktop computer. A client device 130 may optionally include an integrated camera so that the device can be used to upload an image to the image server 110 directly after capturing the image. The client devices 130 are further used to display collections of images served to the client device 130 by the image server 110 . - A single user may use multiple client devices 130 to interact with the
image server 110 using the same user account. For example, a user can use a first client device 130A (e.g., a smartphone) to capture an image, and upload the image to the image server 110 using his or her user account. Later, the same user can use a second client device 130B (e.g., a desktop computer) to access the same user account and view images from the user's collection of images in the account. -
FIG. 2 is a block diagram of the image server 110 depicted in FIG. 1 , in accordance with an embodiment of the invention. The image server 110 includes an interface module 202 , a user account database 206 , a metadata analysis module 208 , an image management module 210 , an image store 212 , a memory box management module 214 , a memory box database 216 , a stack creation module 218 , and a time line layout module 220 . - The
interface module 202 handles communications between the image server 110 and the client devices 130 via the network 120 . The interface module 202 receives communications from users, such as uploaded images and requests to share images and memory boxes, and passes the communications to the appropriate modules. The interface module 202 also receives outgoing data, such as images and notifications, from the other modules and sends the data to the appropriate client devices 130 , for example to display a user's images in a user interface display on a client device 130 , such as the user interface depicted in FIG. 9 . - The user account database 206 stores user accounts that contain information associated with users of the
image server 110 . In one embodiment, each user account contains a unique identifier for the account, at least one piece of contact information for the corresponding user (e.g., an email address), and links to the user's accounts on external social networking services (e.g., FACEBOOK® or TWITTER®). A user account can also be a shared account that contains contact information or social networking links corresponding to two users. Shared accounts allow two people (e.g., a married couple, close friends, or two people with some other relationship) to access the image server 110 using the same user account while respecting that the two people are individuals with different identities. Since a user account is likely to contain sensitive data, the user account database 206 may be encrypted or subject to other data security techniques to protect the privacy of the users. - The
metadata analysis module 208 receives images and analyzes the metadata in the images to find related images in the image store 212 . In one embodiment, the metadata analysis module 208 contains submodules that attempt to match individual types of metadata. For example, the module 208 may contain a subject matching submodule for finding other images that include the same people, a timestamp analysis submodule for finding other images that were taken at approximately the same time, and a location analysis submodule for finding other images that were taken at approximately the same location. In alternative embodiments, the module may also contain submodules for matching different types of metadata. The metadata analysis module 208 can then aggregate the results from the submodules to generate a list of related images. - The
image management module 210 manages the images in the image store 212. As used herein, an image is a media item that contains visual content, such as a photograph or a video captured by a user, and at least one item of metadata that describes the visual content. A detailed description of an example image 300 is presented below with reference to FIG. 3. The image management module 210 can modify an item of metadata in an image after receiving a corresponding request from a user (via the interface module 202) or from a different module of the image server 110. The image management module 210 can also modify the visual content of an image. For example, the image management module 210 may scale images down, for example to a predefined maximum width. The image management module 210 also processes requests to add or remove images from the image store 212. An image is typically added to the image store 212 after a user uploads the image to the image server 110. In some embodiments, the image management module 210 may also create a copy of an image that was previously added to the image store 212. The image management module 210 can remove an image from the image store 212 after receiving a request from the user who uploaded the image. - The memory
box management module 214 creates and deletes memory boxes in the memory box database 216. As used herein, a memory box is an object that defines a collection of images in the image store 212, and a detailed description of an example memory box is presented below with reference to FIG. 4. After a new user account is opened, the memory box management module 214 automatically creates a primary memory box, and all of the user's uploaded images are added to the primary memory box by default. The memory box management module 214 can also create additional memory boxes for a user account. A user may submit requests to create additional memory boxes as a way of categorizing and organizing their images on the image server 110. For example, a user might wish to create separate memory boxes for images of each of her children. Users may also submit requests to delete memory boxes that they have created, and these requests are also processed by the memory box management module 214. - The memory
box management module 214 also receives and processes requests to share memory boxes. After receiving a sharing request, either from the interface module 202 or a different module of the image server 110, the memory box management module 214 accesses the requested memory box in the memory box database 216 and makes the requested change to the sharing settings of the requested memory box. - The
stack creation module 218 can create and maintain stacks of related images for display to a user within a user interface. The stacks visually organize the display of images to a user in a user interface so that the user's display is not overly cluttered with many related images. Instead, by using stacks, the related images can appear as a vertical stack of images having a cover image, and presenting a visual indication that distinguishes a stack from a single image, for example, by showing the borders of several images slightly offset and behind the cover image to present the illusion of depth of a stack within the user interface. Examples of stacks are shown in FIG. 9A, such as stack 909. The stacks may be manually initiated by a user by selecting images to be grouped into a stack, for example by dragging and dropping an image on top of one or more other images from within a user interface, or the stacks may be automatically or semi-automatically created by the stack creation module 218 through analysis of the metadata 304 or image data 302. The images may be grouped into a stack by time, such as all images having a timestamp within a particular time window, wherein the time window may be user-configurable to be short (e.g., on the order of a few seconds) or long (e.g., a day or a week). Alternatively or additionally, the images may be grouped into a stack by image similarity, such as all images having similar image content in the image data 302 (e.g., a similar group of people, similar pose, similar background, etc.) as determined through analysis by a third-party image comparison tool (not shown), or as determined based on the images being from a similar location and/or a perceptual match of the images. The stack creation module 218 may also determine which of a group of stacked images should be chosen as the cover image for the stack, for example, by determining which image of the group is the highest quality.
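The time-window grouping just described might be sketched as follows; the dictionary field names, the window value, and the first-image-as-cover placeholder are illustrative assumptions rather than details given in the specification:

```python
def group_into_stacks(images, window_seconds=10):
    """Group time-ordered images into stacks: an image joins the current
    stack when its timestamp falls within the window of the stack's most
    recent image; otherwise it starts a new stack."""
    stacks = []
    for img in sorted(images, key=lambda i: i["timestamp"]):
        if stacks and img["timestamp"] - stacks[-1][-1]["timestamp"] <= window_seconds:
            stacks[-1].append(img)
        else:
            stacks.append([img])
    # Record a stack identifier and a provisional cover flag in each
    # image's metadata, as the stack creation module 218 does.
    for stack_id, stack in enumerate(stacks):
        for img in stack:
            img["stack_id"] = stack_id
            img["is_cover"] = img is stack[0]  # placeholder until quality ranking runs
    return stacks
```

A quality-ranking step, such as the "important person" heuristic described next, could then replace the provisional cover choice.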
The highest quality image may be determined to be the image showing the face of a person who appears frequently in the user's images at different times and locations (an "important person" to the user), and in which the important person's eyes are open, the person is smiling, etc. The stack creation module 218 may also update the metadata 304 corresponding to each image to indicate that it is part of a stack, for example by assigning a unique stack identifier to each stack, recording the respective stack identifier in the metadata 304 of the images in the stack, and recording whether the image is the cover image of the respective stack. - The time
line layout module 220 may be part of the image server 110 or, in other embodiments, may reside in an application on the client devices 130. Regardless of its physical location, the timeline layout module 220 formats the display of images according to time. Specifically, the timeline layout module 220 may practice the method illustrated in FIG. 6 to lay out images in a plurality of rows, which results in a layout such as the one shown in the example of FIG. 8, both of which are discussed in greater detail below. -
FIG. 3 is a block diagram illustrating an example data structure for an image 300, such as an image from the image store 212, in accordance with an embodiment of the invention. The image 300 contains image data 302 and metadata 304. - The
image data 302 is the visual content of the image 300. As described with reference to the image store 212 of FIG. 2, the image data 302 may be a photograph or a video. The image data 302 may be compressed using any combination of lossless or lossy compression methods known in the art, such as run-length encoding, entropy encoding, chroma subsampling, or transform coding. The image data 302 may also include a stored perceptual value for images, such as a perceptual hash, for use in finding pixel-similar images. The stored perceptual data is used to find pixel-based similarities to determine if two images are duplicates or near duplicates. - The
metadata 304 includes a contributor account identifier 306, sharing settings 308, location data 310, a timestamp 312, an activity 314, and tags of recognized people 316. The metadata 304 may include additional or different information that is not explicitly shown in FIG. 3, such as identifying information for the camera that was used to capture the image data 302, the optical settings that were used to capture the image data 302 (e.g., shutter speed, focal length, f-number), the resolution of the image data 302, or a caption for the image 300. - The
contributor account identifier 306 identifies the user account that was used to upload the image 300 to the image server 110. In one embodiment, the contributor account identifier 306 is the unique account identifier described with reference to the user account store 206 of FIG. 2. Alternatively, the contributor account identifier 306 may be an item of contact information corresponding to the contributor account or some other piece of identifying information. - The sharing
settings 308 are a list of identifiers for additional user accounts and the sharing privileges that have been given to each additional user account. Sharing privileges specify the level of access that the contributing user has granted to the additional user accounts. For example, a first user account may only be allowed to view the image, whereas a second user may be allowed to view the image and add tags for additional recognized people 316. In general, the sharing settings 308 may be used to specify a different set of sharing privileges for each additional user account, and each set of sharing privileges specifies which items of metadata 304 the user account is allowed to change. Defining sharing settings 308 in the metadata 304 of an image 300 beneficially allows individual images 300 to be shared between users. - In an alternative embodiment, the sharing
settings 308 are omitted, and the sharing privileges granted to users are instead stored in association with the corresponding user accounts. In this case, each user account would include a list of identifiers for images 300 that have been shared with the user account, and the sharing privileges that have been granted to the user account for each image 300 would be stored in the user account in association with the corresponding image identifier. In addition, a user may save a caption or some other sort of user-specific annotation in association with an image identifier in his user profile. Saving captions and other user-specific annotations in the corresponding user profiles beneficially allows multiple users to assign different annotations to the same shared image 300. - The
location data 310 is information that identifies where the image 300 was taken. The location data 310 may include, for example, coordinates from a global navigation satellite system (GNSS), which are retrieved and recorded by the camera at the time the image 300 is taken. Alternatively, a user may manually add GNSS coordinates to an image at some point after the image 300 is taken. The location data 310 may also contain a textual location descriptor that provides a user-readable label for where the image 300 was taken. For example, the location descriptor may be "Home," "Soccer Field," or "San Francisco." A user may manually add a location descriptor. Alternatively or additionally, the location of an image can in some circumstances be determined based on the IP address of the device used to upload the image at the time of capture. For example, if a user uploads an image from a smart phone at the time of the capture, but the uploaded image does not contain geo-data, the IP address of the user's device at the time the user uploaded the image can be used to estimate the location of the image. - The
timestamp 312 is the date and time at which the image data 302 was captured. The timestamp 312 may be retrieved from an internal clock of the camera and recorded at the time the image 300 is taken, or it may be manually added or modified by the user after the image 300 is taken. The timestamp 312 can be used for ordering images by the time that they were captured on a time line for display to a user, as will be described in greater detail with reference to FIG. 6. Additionally, timestamps can also be recorded when any changes to the image data 302 or the metadata 304 are stored. These timestamps can be useful in synchronizing data that has been communicated to a client device, as will be described in greater detail with reference to FIG. 5. - The
activity 314 identifies an event at which the image data 302 was captured (e.g., "soccer game," "summer vacation," "birthday party," "high school graduation," etc.) or an action in which the people in the image are engaged (e.g., "playing soccer," "swimming," "eating cake," "graduating from high school," etc.). A user may manually define the activity 314 based on pre-existing knowledge of the context in which the image 300 was taken. For example, if a user took a series of images at a soccer game that occurred between 2 PM and 5 PM on Saturday at a local park, then the user can manually define the activity 314 for those images as "playing soccer" or "soccer game" or "Saturday afternoon soccer game" or any other descriptive text for the activity that the user chooses. After the user uploads the image 300 to the image server 110, the metadata analysis module 208 may instead suggest or assign the activity 314 recorded for other images having similar timestamps to be the activity for the image 300. For example, if the image store 212 contains 10 images with timestamps within a 15 minute window that all have the activity 314 "playing soccer," when another image 300 is uploaded with a timestamp within the same 15 minute window, the metadata analysis module 208 may suggest or assign "playing soccer" as the activity 314 of the image 300. - The tags for
recognized people 316 identify people who are shown in the image data. The tags 316 may be manually added by the user, automatically added by the facial recognition module (not shown), or added based on a combination of automatic facial recognition and user input. In one embodiment, the tags 316 are links to a corresponding facial recognition model in a facial recognition model storage. Alternatively, each tag 316 may simply specify a name for the recognized person. - In one embodiment, the
metadata 304 is used in a multi-step process for determining if two images are duplicates, even if they are in a different format, size, or encoding. First, the metadata 304 of the images is considered, including file name, date, size, and source. Then, an MD5 hash of each image is created. Finally, a perceptual match of the images is performed to determine if an image is in fact the same image as perceived by the user, but happens to be reformatted, resized, or in a different encoding. For videos specifically, a similar process can be performed, but for a pair of videos with different encodings, formats, and resolutions, a perceptual match is performed in place of the MD5 hash match. A sampling of static frames from the same points in each of the videos is extracted and compared using perceptual pixel matching. - In one embodiment, the recognition of duplicates or near duplicates is useful so that rotations and tagging applied to one of a group of highly similar images can be automatically applied to the others. Thus, if a user tags (with recognized people, location, and/or activity) and/or rotates an image that has a very similar perceptual match and/or capture time to other images, the
metadata analysis module 208 may suggest to the user or automatically apply the same adjustments to the other images. -
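The multi-step duplicate check described above might be sketched as follows; the record layout, the tiny average-hash used as the perceptual value, and the bit-distance threshold are illustrative assumptions, not details from the specification:

```python
import hashlib

def average_hash(pixels):
    """Simple perceptual (average) hash of a grayscale image given as a
    2D list: one bit per pixel, set when the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def is_duplicate(a, b, threshold=4):
    """Staged duplicate check: cheap metadata fields first, then an MD5
    hash of the raw bytes, then a perceptual comparison that catches
    reformatted, resized, or re-encoded copies."""
    keys = ("file_name", "date", "size", "source")
    if all(a["meta"].get(k) is not None and a["meta"].get(k) == b["meta"].get(k)
           for k in keys):
        return True  # identical file metadata strongly suggests the same upload
    if hashlib.md5(a["bytes"]).hexdigest() == hashlib.md5(b["bytes"]).hexdigest():
        return True  # byte-identical files are certain duplicates
    # Near-identical content yields hashes that differ in only a few bits.
    return hamming(average_hash(a["pixels"]), average_hash(b["pixels"])) <= threshold
```

For videos, the same staging applies except that, as noted above, sampled frames are compared perceptually instead of relying on an MD5 match.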
FIG. 4 is a block diagram illustrating an example data structure for implementing a memory box 400 from the memory box database 216, in accordance with an embodiment of the invention. As illustrated, the example data structure includes links to images 402, a contributor account identifier 404, and sharing settings 406. - The links to
images 402 identify images 300 in the image store 212 that are in the memory box 400. Thus, an image 300 is 'added' to the memory box 400 when a link to the image 300 is added to the image links 402, and an image 300 is 'removed' from the memory box 400 when the corresponding link is removed. In an alternative embodiment, the memory box 400 may hold a copy of the images instead. However, using links 402 is beneficial because the same image can be used in multiple memory boxes without being copied multiple times. This reduces the amount of storage space that is used on the image server 110 and maintains a single set of metadata 304 for the image. - The
contributor account identifier 404 identifies the user account for which the memory box 400 was created. Similar to the contributor account identifier 306 described with reference to the image 300 of FIG. 3, the contributor account identifier 404 in a memory box 400 may be the unique account identifier described with reference to the user account store 206 of FIG. 2. Alternatively, the contributor account identifier 404 may be an item of contact information corresponding to the contributor account or some other piece of identifying information. The user account identified in the contributor account identifier 404 has full access privileges to the memory box 400 and is able to add and remove image links 402 and modify the sharing settings 406 of the memory box 400. - Similar to the sharing
settings 308 of the image 300 described with reference to FIG. 3, the sharing settings 406 for the memory box 400 are a list of identifiers for additional user accounts and a set of sharing privileges that have been given to each additional user account. A set of sharing privileges defines the specific read/write privileges of an additional user. For example, a set of sharing privileges may specify whether a user is allowed to add images, remove images, or modify images in the memory box 400 (e.g., by adding tags for recognized people or modifying other metadata). A set of sharing privileges may also specify whether a user is allowed to view every image in the memory box or only a subset of the images in the memory box 400. If the user is only allowed to view a subset of the images, then the sharing privileges may define the subset. Alternatively, the subset of images may be predefined (e.g., images that were taken at a particular location, on a particular date, etc.). - As described above with reference to the sharing
settings 308 of the image 300 in FIG. 3, the sharing settings 406 of the memory box 400 may be omitted in favor of storing the sharing privileges for memory boxes 400 in association with the corresponding user accounts. Thus, each user account in the user account database 206 would include a list of identifiers for memory boxes 400 that have been shared with the user account and the corresponding sharing privileges that have been granted. -
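One way the memory box structure of FIG. 4 and its per-account sharing privileges could be represented and checked; the privilege names and the dictionary layout are illustrative assumptions:

```python
# A memory box as sketched in FIG. 4: links to images, the contributor
# account, and per-account privilege sets (names are hypothetical).
memory_box = {
    "image_links": ["img-1", "img-2", "img-3"],
    "contributor": "account-alice",
    "sharing": {
        "account-bob": {"view", "add_images"},
        "account-carol": {"view"},
    },
}

def can(box, account, privilege):
    """The contributor has full access privileges; any other account gets
    only the privileges listed in the box's sharing settings."""
    if account == box["contributor"]:
        return True
    return privilege in box["sharing"].get(account, set())
```

The same shape works for the alternative embodiment in which privileges are stored with the user accounts instead of the memory box; only the direction of the lookup changes.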
FIG. 5 is a flow chart illustrating a method of serving images for display on a client device 130, in accordance with an embodiment of the invention. The method can be advantageously used to speed the delivery of images and related data over a network 120 for display on a client device 130, and to conserve the amount of bandwidth consumed by updating the images and related data on the client device 130. - In
step 501, for each stored image, a unique identifier for the image and parameters associated with the image are converted into a binary string. Examples of parameters may include parameters from the metadata 304 and/or a variety of other parameters. Specific examples of parameters include: whether the image is from a photo or video, whether the image is enhanced or not, what the source of the image is, whether the image is rotated or not, whether the image is stacked or not, whether it is the cover image of a stack or not, the timestamp from when the image was created, the timestamp from when the image was imported to the image server 110, etc. By converting the unique identifier for the image and the parameters associated with the image into a binary string, the relevant information can be conveyed in an efficient package and the overall size of the data conveyed is much smaller. In some cases, the binary string is represented in hexadecimal notation to make the string more compact. - In
step 502, responsive to a received request from a client device 130 for display of images, a timestamp is set. This timestamp marks the time at which the client request was received. The client request may specify only a subset of images in a memory box 400 through the user's account at the image server 110 that the user wants to view. For example, the client request may be the result of a query for images that satisfy a certain metadata criterion or combination of metadata criteria, such as the people in the images (recognized people 316), the location in which the images were taken (location data 310), and/or the activity shown in the images (activity 314). Additionally or alternatively, the client request may specify only images from within a particular timeframe that a user wants to view on a time line, or the starting timeframe may be set by default (e.g., a default of images from this month, images from this year, etc.). - In
step 503, all requested images are served to the client with the respective binary strings. Although the compactness of the binary string reduces the amount of data that is transferred over the network 120 when the requested images are served from the image server 110 to the client device 130, the amount of data in the binary string is dwarfed by the size of the image data 302 for the requested images. The requested images and the respective binary strings received from the image server 110 are stored in a local cache on the client device 130. - In
step 504, updates to the stored images and the associated parameters are recorded by the image server 110, including the respective time of the update. The updates can include new images uploaded to the image server 110, or changes to metadata of the images. - Subsequently, in
step 505, responsive to the next received request from the client device 130 for the display of images, all the updates since the previously set timestamp are served to the client device 130, and a new timestamp is set. In this way, only the changes to stored images and associated parameters since the last request for images from that client device 130 are communicated to the client device 130, rather than serving the entire set of images with their respective binary strings again. By serving only the information delta representing the incremental changes to the stored images and associated parameters since the last set timestamp, significantly less data is transferred between the image server 110 and the client device 130, thus conserving bandwidth and processing resources. In cases where the image data 302 is unchanged from the previously set timestamp, and merely one or more associated parameters are changed, it is unnecessary to send another copy of the image data 302 to the client device 130. Rather, only the updated respective binary string is sent instead. As indicated by the arrow looping from step 505 back to step 504 in the flow chart illustrated in FIG. 5, steps 504 and 505 can be repeated as further updates are recorded by the image server 110 and subsequent requests for the display of images are received. Each time, only the updates since the previous timestamp need to be served in order to ensure that the client device 130 has the most current version of the images and associated parameters available from the user's account. -
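A compact sketch of the two ideas in the method of FIG. 5: packing an image's identifier and parameters into one integer rendered in hexadecimal (step 501), and serving only the delta since each client's last request (steps 502-505). The bit layout, flag choices, and a logical clock standing in for wall-clock timestamps are illustrative assumptions:

```python
def encode_record(image_id, is_video, rotated, is_cover, created_ts, imported_ts):
    """Pack the identifier, a few parameter flag bits, and two 32-bit
    timestamps into one integer, rendered in hexadecimal for compactness."""
    flags = (is_video << 2) | (rotated << 1) | is_cover
    packed = (image_id << 67) | (flags << 64) | (created_ts << 32) | imported_ts
    return format(packed, "x")

class ImageSync:
    """Timestamp-based delta sync: the first request returns every record;
    later requests return only records updated since the previous request."""
    def __init__(self):
        self.records = {}      # image_id -> (hex string, logical update time)
        self.clock = 0
        self.last_served = {}  # client_id -> clock value at the last request

    def update(self, image_id, hex_string):
        # Step 504: record the update together with its time.
        self.clock += 1
        self.records[image_id] = (hex_string, self.clock)

    def serve(self, client_id):
        # Steps 502/505: set a new mark and return only the delta.
        since = self.last_served.get(client_id, 0)
        self.last_served[client_id] = self.clock
        return {i: s for i, (s, t) in self.records.items() if t > since}
```

When only parameters change, re-sending the short hexadecimal string (not the image data) is exactly the bandwidth saving the method describes.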
FIG. 6 is a flow chart illustrating a method for laying out images based on time in a plurality of rows for display within a user interface, in accordance with an embodiment of the invention. In one implementation, the method illustrated in FIG. 6 is performed by a timeline layout module 220 of an image server 110. In other implementations, the method illustrated in FIG. 6 may be performed by a similar module residing on a client device 130. It is assumed that the module performing the layout has access to the images and a respective timestamp for each image that indicates when the image was created. - In
step 601, the timeline layout module 220 finds the next image in order by time by referencing the timestamps of the images. For convenience, all images to be laid out may first be ordered by their timestamps so that they are queued for quick placement. Otherwise, the next image in order by time may be identified from the remaining images to be laid out each time step 601 is iterated. At the beginning of building the layout, the next image may be the closest image in time to a user-provided or default first end of the time line. The first end of the time line can be either the most recent end of the time line or the oldest end of the time line. In other words, the time line can be built from a point in time forward in time or from a point in time backward in time, depending on the user's preferences. Likewise, the time line can be built horizontally or vertically, depending on the user's preferences. For clarity, this description will refer to a horizontally oriented time line. This description will use the term "row" to refer to horizontally oriented rows for a horizontally oriented time line and also use the term "row" to refer to vertically oriented columns in a vertically oriented time line. - In
step 602, the shortest row of a plurality of rows is determined based on an accumulated length of each row. At the beginning, all rows may be zero in length, but as images are added to the rows, they will grow in length. In a horizontally oriented time line, each row will grow in lateral extent by the width of an added image plus the width of a border or blank space between the previous image in the row and the added image. In a vertically oriented time line, each row will grow in vertical extent by the height of an added image plus the height of a border or blank space between the previous image in the row and the added image. The accumulated length of each row is the sum of the widths of the images in the row plus the sum of the blank space between each of the images in the row. In cases where two rows are the same length, the tie may be broken in favor of the row closest to the top of the user interface for horizontally oriented time lines or closest to the left of the user interface for vertically oriented time lines. - In
step 603, the image is placed in the shortest row. The placement of the image in the row may be spaced so as to maintain a border or blank space between the previous image in the row and the newly placed image, and the borders or blank spaces between each image in the various rows may be approximately the same to provide a cohesive and visually pleasing layout. - As indicated by the arrow looping from
step 603 back to step 601 in the flow chart illustrated in FIG. 6, steps 601-603 can be iterated until there are no more images to lay out based on time, or at least until a sufficient number of images have been placed in the plurality of rows to fill a screen of the client device 130 that is used to display the layout. Because the images vary in dimensions, the resulting visual display of the images in the user interface based on the layout method of FIG. 6 does not resemble the grids that are commonly used in the prior art. An example of a prior art grid layout of 15 images is shown in FIG. 7. In this example, image 15 is the newest and image 1 is the oldest. The dotted lines divide the display into approximately equal sections where the images 1-15 of various sizes are centered. FIG. 8 is an example of a non-grid layout of images in a plurality of rows, in accordance with an embodiment of the invention. This example applies the method of FIG. 6 to lay out the same 15 images from FIG. 7 into three horizontally oriented rows 801, 802, and 803, where the border or blank space 808 between each image in each of the rows is approximately the same. In this example, note that as the method places the images 15-8 into the rows 801, 802, and 803, the accumulated length of each row grows at a different rate. When the method comes to place image 7 into a row, it is noted that the accumulated length of row 801 (comprising the width of image 15, the width of image 12, the width of image 9, and two blank spaces between the images) is shorter than the accumulated length of row 802 (comprising the width of image 14, the width of image 11, the width of image 8, and two blank spaces between the images) and shorter than the accumulated length of row 803 (comprising the width of image 13, the width of image 10, and one blank space between the images). Thus, image 7 is placed into row 801. Subsequently, image 6 is placed into row 803 because row 803 became the new shortest row with the placement of image 7 in row 801, and so on until all of the images to be displayed are laid out.
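The greedy placement of steps 601-603 can be sketched as follows; the fixed row count, the pixel widths, and the gap value are illustrative assumptions:

```python
def layout_rows(images, num_rows=3, gap=8):
    """Place each image, in time order (newest first), into the currently
    shortest row; ties are broken toward the topmost row."""
    rows = [[] for _ in range(num_rows)]
    lengths = [0] * num_rows
    for img in sorted(images, key=lambda i: i["timestamp"], reverse=True):
        # Step 602: find the shortest row (min() keeps the lowest index on ties,
        # i.e. the row closest to the top of the user interface).
        shortest = min(range(num_rows), key=lambda r: lengths[r])
        # Step 603: place the image, accounting for the blank space before it.
        if rows[shortest]:
            lengths[shortest] += gap
        rows[shortest].append(img["id"])
        lengths[shortest] += img["width"]
    return rows
```

For a vertically oriented time line, the same loop applies with image heights in place of widths.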
- In one embodiment, the method only lays out images that the user can currently view on the screen of the client device 130, and waits for the user to scroll in either direction to lay out further images in that direction. In another embodiment, the method lays out images that the user can currently view on the screen of the client device 130 plus an off-screen margin on either side of the current view, so that small changes of the view in either direction (e.g., by the user scrolling slightly to the left or right) can be accommodated without the need to perform further layout of images.
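A sketch of that windowed approach, selecting which images need layout work for the current scroll position; the coordinate fields and the margin value are illustrative assumptions:

```python
def images_to_lay_out(images, view_start, view_width, margin=200):
    """Return only the images whose horizontal extent intersects the current
    viewport widened by an off-screen margin on each side, so that small
    scrolls in either direction require no further layout."""
    lo = view_start - margin
    hi = view_start + view_width + margin
    return [img for img in images
            if img["x"] < hi and img["x"] + img["width"] > lo]
```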
-
FIG. 9A is another example of a non-grid layout of images in a plurality of rows 901-903, including some images organized into stacks (e.g., stack 909), in accordance with an embodiment of the invention. The example of FIG. 9A includes both photographs (e.g., photograph 991) and videos (e.g., video 992) organized by time. FIG. 9B is another example of a non-grid layout of images in a plurality of rows 904-905, with a time line 910 displayed between the rows 904 and 905. The dates displayed on the time line 910 indicate the date that the associated images were captured. In this example, all images captured on the same day are grouped together, for example by including one image from the day in a larger size (e.g., large image 911 taken Feb. 19, 2012) and the remaining images from the same day as a series of images in smaller size (e.g., small images 912, 913) in a grid pattern next to the larger image, in accordance with an embodiment of the invention. The non-grid layout of the larger images in the two rows 904 and 905 on either side of the time line 910 may follow the technique described above with reference to FIG. 6. -
FIG. 10 is a block diagram of the components of a computing system 1000 for use, for example, as the image server 110 or client devices 130 depicted in FIG. 1, in accordance with an embodiment of the invention. Illustrated are at least one processor 1002 coupled to a chipset 1004. Also coupled to the chipset 1004 are a memory 1006, a storage device 1008, a keyboard 1010, a graphics adapter 1012, a pointing device 1014, a network adapter 1016, and a camera 1024. A display 1018 is coupled to the graphics adapter 1012. In one embodiment, the functionality of the chipset 1004 is provided by a memory controller hub 1020 and an I/O controller hub 1022. In another embodiment, the memory 1006 is coupled directly to the processor 1002 instead of the chipset 1004. - The
storage device 1008 is any non-transitory computer-readable storage medium, such as a hard drive, compact disk read-only memory (CD-ROM), DVD, or a solid-state memory device. The memory 1006 holds instructions and data used by the processor 1002. The pointing device 1014 may be a mouse, track ball, or other type of pointing device, and is used in combination with the keyboard 1010 to input data into the computer 1000. The graphics adapter 1012 displays images and other information on the display 1018. The network adapter 1016 couples the computer 1000 to a network. - The
camera 1024 captures digital photographs and videos. As is known in the art, the camera 1024 includes an image sensor and an optical system (e.g., one or more lenses and a diaphragm), in addition to other components. The camera 1024 may also include a microphone for capturing audio data, either as standalone audio or in conjunction with video data captured by the image sensor. In one embodiment, the camera 1024 is a separate computing device with its own processor, storage medium, and memory (e.g., a point-and-shoot or DSLR camera), and the camera 1024 is coupled to the I/O controller hub through an external connection (e.g., USB). Alternatively, the camera 1024 may be a component of the computing system 1000 (e.g., an integrated camera in a smart phone, tablet computer, or PDA). - As is known in the art, a
computer 1000 can have different and/or other components than those shown in FIG. 10. In addition, the computer 1000 can lack certain illustrated components. In one embodiment, a computer 1000 acting as a server may lack a keyboard 1010, pointing device 1014, graphics adapter 1012, display 1018, and/or camera 1024. Moreover, the storage device 1008 can be local and/or remote from the computer 1000 (such as embodied within a storage area network (SAN)). - As is known in the art, the
computer 1000 is adapted to execute computer program modules for providing functionality described herein. As used herein, the term "module" refers to computer program logic utilized to provide the specified functionality. Thus, a module can be implemented in hardware, firmware, and/or software. In one embodiment, program modules are stored on the storage device 1008, loaded into the memory 1006, and executed by the processor 1002. - Embodiments of the physical components described herein can include other and/or different modules than the ones described here. In addition, the functionality attributed to the modules can be performed by other or different modules in other embodiments. Moreover, this description occasionally omits the term "module" for purposes of clarity and convenience.
- Some portions of the above description describe the embodiments in terms of algorithmic processes or operations. These algorithmic descriptions and representations are commonly used by those skilled in the data processing arts to convey the substance of their work effectively to others skilled in the art. These operations, while described functionally, computationally, or logically, are understood to be implemented by computer programs comprising instructions for execution by a processor or equivalent electrical circuits, microcode, or the like. Furthermore, it has also proven convenient at times to refer to these arrangements of functional operations as modules, without loss of generality. The described operations and their associated modules may be embodied in software, firmware, hardware, or any combination thereof.
- The present invention also relates to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored on a computer readable medium that can be accessed by the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of computer-readable storage medium suitable for storing electronic instructions, each coupled to a computer system bus. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.
- As used herein any reference to “one embodiment” or “an embodiment” means that a particular element, feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
- As used herein, the terms “comprises,” “comprising,” “includes,” “including,” “has,” “having” or any other variation thereof, are intended to cover a non-exclusive inclusion. For example, a process, method, article, or apparatus that comprises a list of elements is not necessarily limited to only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Further, unless expressly stated to the contrary, “or” refers to an inclusive or and not to an exclusive or. For example, a condition A or B is satisfied by any one of the following: A is true (or present) and B is false (or not present), A is false (or not present) and B is true (or present), and both A and B are true (or present).
- In addition, the articles “a” and “an” are employed to describe elements and components of the embodiments herein. This is done merely for convenience and to give a general sense of the disclosure. This description should be read to include one or at least one, and the singular also includes the plural unless it is obvious that it is meant otherwise.
- Upon reading this disclosure, those of skill in the art will appreciate still additional alternative structural and functional designs for performing the embodiments of the invention. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the present invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope as defined in the appended claims.
Claims (17)
1. A computer-implemented method for storing and serving images and associated properties, the method comprising:
storing images in a computer system, wherein the images are associated with unique identifiers and parameters;
for each of the stored images, converting a unique identifier and parameters associated with the image to a binary string;
responsive to a first request from a client for display of a first set of the stored images, setting a first timestamp;
serving the first set of the stored images and their respective binary strings to the client;
recording updates to the stored images and associated parameters, and respective times for the updates; and
responsive to a second request from the client for display of a second set of the stored images, serving the updates since the first timestamp and respective times for the updates.
2. The computer-implemented method of claim 1, further comprising:
creating at least one stack of images from the images stored in the computer system, wherein the first request or the second request from the client specifies to display a stack of images stored in the computer system.
3. The computer-implemented method of claim 2, wherein the parameters associated with the stored images include a unique stack identifier for the at least one stack of images.
4. The computer-implemented method of claim 2, wherein the updates include identifications for a recently created stack of images.
5. The computer-implemented method of claim 2, wherein the parameters associated with the stored images specify whether the image is a member of a stack of images.
6. The computer-implemented method of claim 2, wherein the parameters associated with the stored images specify whether the image is a cover image of a stack of images.
7. The computer-implemented method of claim 2, wherein the at least one stack of images is automatically created by the computer system based on a predetermined criterion.
8. The computer-implemented method of claim 1, further comprising:
responsive to the second request from the client for display of the second set of the stored images, setting a second timestamp; and
recording additional updates to the stored images and associated parameters, and respective times of the additional updates.
9. The computer-implemented method of claim 8, further comprising:
responsive to a third request from the client for display of images, serving the additional updates since the second timestamp and the respective times of the additional updates.
10. The computer-implemented method of claim 1, wherein the binary string is represented in hexadecimal notation.
11. The computer-implemented method of claim 1, wherein the first request or the second request from the client specifies to display stored images within a particular timeframe.
12. The computer-implemented method of claim 1, wherein the first request or the second request from the client specifies to display stored images taken at a particular location or locations.
13. The computer-implemented method of claim 1, wherein the first request or the second request from the client specifies to display stored images associated with a particular activity.
14. The computer-implemented method of claim 1, further comprising:
storing the unique identifiers of the images in a plurality of memory boxes in the computer system, wherein the first request or the second request from the client specifies to display stored images associated with one of the plurality of memory boxes.
15. The computer-implemented method of claim 14, further comprising:
storing sharing privileges for images associated with each memory box, wherein the sharing privileges define whether images specified in the first request or the second request are allowed to be shared to a user.
16. The computer-implemented method of claim 1, wherein the updates include images which are recently stored in the computer system after the first timestamp.
17. The computer-implemented method of claim 1, wherein the updates include changes to metadata associated with the images stored in the computer system.
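The delta-serving flow recited in claims 1, 8, and 10 above can be illustrated with a minimal sketch. Everything below is an assumption for illustration only, not the patented implementation: the class, method, and field names are invented, and a logical counter stands in for the wall-clock timestamps of the claims.

```python
import itertools


class ImageStore:
    """Illustrative sketch of the serve-then-delta pattern of claim 1.

    A monotonically increasing counter models timestamps so the
    example is deterministic; a real system would use wall time.
    """

    def __init__(self):
        self._clock = itertools.count(1)  # stand-in for timestamps
        self.images = {}                  # unique identifier -> parameters
        self.updates = []                 # (time, unique identifier, change)
        self.client_ts = {}               # client -> last-served timestamp

    @staticmethod
    def encode(unique_id, params):
        # Claims 1 and 10: convert the identifier and its parameters
        # to a binary string, represented in hexadecimal notation.
        payload = f"{unique_id}|{sorted(params.items())}".encode()
        return payload.hex()

    def store(self, unique_id, params):
        # Store the image and record the update with its time.
        self.images[unique_id] = params
        self.updates.append((next(self._clock), unique_id, "stored"))

    def serve_initial(self, client_id, requested_ids):
        # First request: set a timestamp, then serve the requested
        # images with their binary strings.
        self.client_ts[client_id] = next(self._clock)
        return {uid: self.encode(uid, self.images[uid])
                for uid in requested_ids if uid in self.images}

    def serve_updates(self, client_id):
        # Later request: serve only the updates recorded since the
        # client's stored timestamp, with their respective times
        # (claim 1), and set a fresh timestamp (claim 8).
        since = self.client_ts.get(client_id, 0)
        delta = [u for u in self.updates if u[0] > since]
        self.client_ts[client_id] = next(self._clock)
        return delta
```

With this sketch, a second call to `serve_updates` for the same client returns only changes recorded after the previous call, which is the incremental behavior claims 8 and 9 describe.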
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/539,709 US20150074107A1 (en) | 2012-06-15 | 2014-11-12 | Storing and serving images in memory boxes |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201213525076A | 2012-06-15 | 2012-06-15 | |
US13/525,037 US8861804B1 (en) | 2012-06-15 | 2012-06-15 | Assisted photo-tagging with facial recognition models |
US14/539,709 US20150074107A1 (en) | 2012-06-15 | 2014-11-12 | Storing and serving images in memory boxes |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US201213525076A Continuation | 2012-06-15 | 2012-06-15 |
Publications (1)
Publication Number | Publication Date |
---|---|
US20150074107A1 true US20150074107A1 (en) | 2015-03-12 |
Family
ID=51661196
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/525,037 Active 2032-12-03 US8861804B1 (en) | 2012-06-15 | 2012-06-15 | Assisted photo-tagging with facial recognition models |
US14/478,585 Active US9063956B2 (en) | 2012-06-15 | 2014-09-05 | Assisted photo-tagging with facial recognition models |
US14/539,709 Abandoned US20150074107A1 (en) | 2012-06-15 | 2014-11-12 | Storing and serving images in memory boxes |
US14/731,833 Active US9378408B2 (en) | 2012-06-15 | 2015-06-05 | Assisted photo-tagging with facial recognition models |
US15/187,189 Active 2032-07-07 US10043059B2 (en) | 2012-06-15 | 2016-06-20 | Assisted photo-tagging with facial recognition models |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/525,037 Active 2032-12-03 US8861804B1 (en) | 2012-06-15 | 2012-06-15 | Assisted photo-tagging with facial recognition models |
US14/478,585 Active US9063956B2 (en) | 2012-06-15 | 2014-09-05 | Assisted photo-tagging with facial recognition models |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/731,833 Active US9378408B2 (en) | 2012-06-15 | 2015-06-05 | Assisted photo-tagging with facial recognition models |
US15/187,189 Active 2032-07-07 US10043059B2 (en) | 2012-06-15 | 2016-06-20 | Assisted photo-tagging with facial recognition models |
Country Status (1)
Country | Link |
---|---|
US (5) | US8861804B1 (en) |
Families Citing this family (86)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9785653B2 (en) * | 2010-07-16 | 2017-10-10 | Shutterfly, Inc. | System and method for intelligently determining image capture times for image applications |
JP5814700B2 (en) * | 2011-08-25 | 2015-11-17 | キヤノン株式会社 | Image processing system and image processing method |
US9122912B1 (en) * | 2012-03-15 | 2015-09-01 | Google Inc. | Sharing photos in a social network system |
CN103620648B (en) * | 2012-05-29 | 2018-02-02 | 松下知识产权经营株式会社 | Image evaluation apparatus, image evaluation method, integrated circuit |
US9413906B2 (en) * | 2012-09-28 | 2016-08-09 | Interactive Memories Inc. | Method for making relevant content proposals based on information gleaned from an image-based project created in an electronic interface |
US9465813B1 (en) | 2012-11-09 | 2016-10-11 | Amazon Technologies, Inc. | System and method for automatically generating albums |
US9330301B1 (en) * | 2012-11-21 | 2016-05-03 | Ozog Media, LLC | System, method, and computer program product for performing processing based on object recognition |
US9336435B1 (en) * | 2012-11-21 | 2016-05-10 | Ozog Media, LLC | System, method, and computer program product for performing processing based on object recognition |
US9189468B2 (en) * | 2013-03-07 | 2015-11-17 | Ricoh Company, Ltd. | Form filling based on classification and identification of multimedia data |
US9405771B2 (en) * | 2013-03-14 | 2016-08-02 | Microsoft Technology Licensing, Llc | Associating metadata with images in a personal image collection |
JP6221305B2 (en) * | 2013-03-29 | 2017-11-01 | 富士通株式会社 | Information processing device |
US9253266B2 (en) * | 2013-05-03 | 2016-02-02 | Spayce, Inc. | Social interaction using facial recognition |
KR20150007723A (en) * | 2013-07-12 | 2015-01-21 | 삼성전자주식회사 | Mobile apparutus and control method thereof |
US9934536B2 (en) | 2013-09-20 | 2018-04-03 | Bank Of America Corporation | Interactive map for grouped activities within a financial and social management system |
US9786019B2 (en) | 2013-09-20 | 2017-10-10 | Bank Of America Corporation | Grouped packages for a financial and social management system |
US10002395B2 (en) | 2013-09-20 | 2018-06-19 | Bank Of America Corporation | Interactive mapping system for user experience augmentation |
US9324114B2 (en) * | 2013-09-20 | 2016-04-26 | Bank Of America Corporation | Interactive map for grouped activities within a financial and social management system |
US9324115B2 (en) | 2013-09-20 | 2016-04-26 | Bank Of America Corporation | Activity review for a financial and social management system |
US9786018B2 (en) | 2013-09-20 | 2017-10-10 | Bank Of America Corporation | Activity list enhanced with images for a financial and social management system |
US9323852B2 (en) | 2013-09-20 | 2016-04-26 | Bank Of America Corporation | Activity list filters for a financial and social management system |
US20150120443A1 (en) * | 2013-10-30 | 2015-04-30 | International Business Machines Corporation | Identifying objects in photographs |
US20150317511A1 (en) * | 2013-11-07 | 2015-11-05 | Orbeus, Inc. | System, method and apparatus for performing facial recognition |
KR20150116641A (en) * | 2014-04-08 | 2015-10-16 | 한국과학기술연구원 | Apparatus for recognizing image, method for recognizing image thereof, and method for generating face image thereof |
US10540541B2 (en) | 2014-05-27 | 2020-01-21 | International Business Machines Corporation | Cognitive image detection and recognition |
EP3172683A4 (en) * | 2014-07-25 | 2018-01-10 | Samsung Electronics Co., Ltd. | Method for retrieving image and electronic device thereof |
CN105824840B (en) * | 2015-01-07 | 2019-07-16 | 阿里巴巴集团控股有限公司 | A kind of method and device for area label management |
US10013620B1 (en) | 2015-01-13 | 2018-07-03 | State Farm Mutual Automobile Insurance Company | Apparatuses, systems and methods for compressing image data that is representative of a series of digital images |
US10091296B2 (en) | 2015-04-17 | 2018-10-02 | Dropbox, Inc. | Collection folder for collecting file submissions |
US9692826B2 (en) | 2015-04-17 | 2017-06-27 | Dropbox, Inc. | Collection folder for collecting file submissions via a customizable file request |
US10885209B2 (en) | 2015-04-17 | 2021-01-05 | Dropbox, Inc. | Collection folder for collecting file submissions in response to a public file request |
US10621367B2 (en) | 2015-04-17 | 2020-04-14 | Dropbox, Inc. | Collection folder for collecting photos |
CN106295489B (en) * | 2015-06-29 | 2021-09-28 | 株式会社日立制作所 | Information processing method, information processing device and video monitoring system |
US10373277B2 (en) * | 2015-07-02 | 2019-08-06 | Goldman Sachs & Co. LLC | System and method for electronically providing legal instrument |
US11144591B2 (en) * | 2015-07-16 | 2021-10-12 | Pomvom Ltd. | Coordinating communication and/or storage based on image analysis |
CN107852438B (en) * | 2015-07-30 | 2021-03-19 | Lg电子株式会社 | Mobile terminal and control method thereof |
US9830727B2 (en) * | 2015-07-30 | 2017-11-28 | Google Inc. | Personalizing image capture |
US10863003B2 (en) * | 2015-09-10 | 2020-12-08 | Elliot Berookhim | Methods, devices, and systems for determining a subset for autonomous sharing of digital media |
US20170128843A1 (en) * | 2015-09-28 | 2017-05-11 | Versaci Interactive Gaming, Inc. | Systems, methods, and apparatuses for extracting and analyzing live video content |
US9826001B2 (en) * | 2015-10-13 | 2017-11-21 | International Business Machines Corporation | Real-time synchronous communication with persons appearing in image and video files |
US20170118079A1 (en) * | 2015-10-24 | 2017-04-27 | International Business Machines Corporation | Provisioning computer resources to a geographical location based on facial recognition |
US9798742B2 (en) * | 2015-12-21 | 2017-10-24 | International Business Machines Corporation | System and method for the identification of personal presence and for enrichment of metadata in image media |
US10664500B2 (en) | 2015-12-29 | 2020-05-26 | Futurewei Technologies, Inc. | System and method for user-behavior based content recommendations |
US10713966B2 (en) | 2015-12-31 | 2020-07-14 | Dropbox, Inc. | Assignments for classrooms |
US9679426B1 (en) | 2016-01-04 | 2017-06-13 | Bank Of America Corporation | Malfeasance detection based on identification of device signature |
US10373131B2 (en) | 2016-01-04 | 2019-08-06 | Bank Of America Corporation | Recurring event analyses and data push |
US9977950B2 (en) * | 2016-01-27 | 2018-05-22 | Intel Corporation | Decoy-based matching system for facial recognition |
US9830055B2 (en) | 2016-02-16 | 2017-11-28 | Gal EHRLICH | Minimally invasive user metadata |
EP3229174A1 (en) * | 2016-04-06 | 2017-10-11 | L-1 Identity Solutions AG | Method for video investigation |
US10380429B2 (en) | 2016-07-11 | 2019-08-13 | Google Llc | Methods and systems for person detection in a video feed |
US10957171B2 (en) | 2016-07-11 | 2021-03-23 | Google Llc | Methods and systems for providing event alerts |
US10200560B2 (en) * | 2017-01-13 | 2019-02-05 | Adobe Inc. | Automated sharing of digital images |
US11321951B1 (en) | 2017-01-19 | 2022-05-03 | State Farm Mutual Automobile Insurance Company | Apparatuses, systems and methods for integrating vehicle operator gesture detection within geographic maps |
US10095915B2 (en) | 2017-01-25 | 2018-10-09 | Chaim Mintz | Photo subscription system and method using biometric identification |
US11222227B2 (en) | 2017-01-25 | 2022-01-11 | Chaim Mintz | Photo subscription system and method using biometric identification |
US11100489B2 (en) * | 2017-01-31 | 2021-08-24 | Paypal, Inc. | Accessing accounts at payment system via photos |
US10284505B2 (en) * | 2017-05-03 | 2019-05-07 | International Business Machines Corporation | Social media interaction aggregation for duplicate image posts |
WO2018217193A1 (en) * | 2017-05-24 | 2018-11-29 | Google Llc | Bayesian methodology for geospatial object/characteristic detection |
US11256951B2 (en) * | 2017-05-30 | 2022-02-22 | Google Llc | Systems and methods of person recognition in video streams |
US11783010B2 (en) | 2017-05-30 | 2023-10-10 | Google Llc | Systems and methods of person recognition in video streams |
US10599950B2 (en) | 2017-05-30 | 2020-03-24 | Google Llc | Systems and methods for person recognition data management |
EP3648008A4 (en) * | 2017-06-30 | 2020-07-08 | Guangdong Oppo Mobile Telecommunications Corp., Ltd. | Face recognition method and apparatus, storage medium, and electronic device |
CN109389019B (en) * | 2017-08-14 | 2021-11-05 | 杭州海康威视数字技术股份有限公司 | Face image selection method and device and computer equipment |
US10366279B2 (en) | 2017-08-29 | 2019-07-30 | Bank Of America Corporation | System for execution of multiple events based on image data extraction and evaluation |
US10664688B2 (en) | 2017-09-20 | 2020-05-26 | Google Llc | Systems and methods of detecting and responding to a visitor to a smart home environment |
US11134227B2 (en) | 2017-09-20 | 2021-09-28 | Google Llc | Systems and methods of presenting appropriate actions for responding to a visitor to a smart home environment |
WO2019077013A1 (en) | 2017-10-18 | 2019-04-25 | Soapbox Labs Ltd. | Methods and systems for processing audio signals containing speech data |
CN107679222B (en) * | 2017-10-20 | 2020-05-19 | Oppo广东移动通信有限公司 | Picture processing method, mobile terminal and computer readable storage medium |
CN108229322B (en) * | 2017-11-30 | 2021-02-12 | 北京市商汤科技开发有限公司 | Video-based face recognition method and device, electronic equipment and storage medium |
US11803919B2 (en) * | 2017-12-05 | 2023-10-31 | International Business Machines Corporation | Dynamic collection and distribution of contextual data |
CN108038176B (en) * | 2017-12-07 | 2020-09-29 | 浙江大华技术股份有限公司 | Method and device for establishing passerby library, electronic equipment and medium |
US20190332848A1 (en) * | 2018-04-27 | 2019-10-31 | Honeywell International Inc. | Facial enrollment and recognition system |
KR102035644B1 (en) * | 2018-07-09 | 2019-10-23 | 주식회사 필로시스 | Blood glucose measurement device and method to determine blood glucose unit automatically |
US11854303B1 (en) * | 2018-08-15 | 2023-12-26 | Robert William Kocher | Selective Facial Recognition Assistance method and system (SFRA) |
US11336968B2 (en) | 2018-08-17 | 2022-05-17 | Samsung Electronics Co., Ltd. | Method and device for generating content |
CN109597903B (en) * | 2018-11-21 | 2021-12-28 | 北京市商汤科技开发有限公司 | Image file processing apparatus and method, file storage system, and storage medium |
US10311334B1 (en) * | 2018-12-07 | 2019-06-04 | Capital One Services, Llc | Learning to process images depicting faces without leveraging sensitive attributes in deep learning models |
JP7135200B2 (en) * | 2019-03-26 | 2022-09-12 | 富士フイルム株式会社 | Image processing device, image processing method and image processing program |
JP7204596B2 (en) * | 2019-06-28 | 2023-01-16 | 富士フイルム株式会社 | Image processing device, image processing method, image processing program, and recording medium storing the program |
CN110796091B (en) * | 2019-10-30 | 2023-08-01 | 浙江易时科技股份有限公司 | Sales exhibition hall passenger flow batch statistics based on face recognition technology and assisted by manual correction |
US11893795B2 (en) | 2019-12-09 | 2024-02-06 | Google Llc | Interacting with visitors of a connected home environment |
US20210390250A1 (en) * | 2020-06-15 | 2021-12-16 | Canon Kabushiki Kaisha | Information processing apparatus |
US11244169B2 (en) | 2020-06-15 | 2022-02-08 | Bank Of America Corporation | System for executing multiple events based on video data extraction and evaluation |
US11108996B1 (en) | 2020-07-28 | 2021-08-31 | Bank Of America Corporation | Two-way intercept using coordinate tracking and video classification |
US20220198861A1 (en) * | 2020-12-18 | 2022-06-23 | Sensormatic Electronics, LLC | Access control system screen capture facial detection and recognition |
WO2022204153A1 (en) * | 2021-03-22 | 2022-09-29 | Angarak, Inc. | Image based tracking system |
WO2023084709A1 (en) * | 2021-11-11 | 2023-05-19 | 日本電気株式会社 | Estimation device, estimation method, and program |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5926816A (en) * | 1996-10-09 | 1999-07-20 | Oracle Corporation | Database Synchronizer |
US20050225644A1 (en) * | 2004-04-07 | 2005-10-13 | Olympus Corporation | Digital camera, album managing method, album management program product, and album management program transmission medium |
US20050237567A1 (en) * | 2004-03-19 | 2005-10-27 | Canon Europa Nv | Method and apparatus for creating and editing a library of digital media documents |
US20090064031A1 (en) * | 2007-09-04 | 2009-03-05 | Apple Inc. | Scrolling techniques for user interfaces |
US20090164494A1 (en) * | 2007-12-21 | 2009-06-25 | Google Inc. | Embedding metadata with displayable content and applications thereof |
US20120314917A1 (en) * | 2010-07-27 | 2012-12-13 | Google Inc. | Automatic Media Sharing Via Shutter Click |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7945653B2 (en) * | 2006-10-11 | 2011-05-17 | Facebook, Inc. | Tagging digital media |
CA2711143C (en) * | 2007-12-31 | 2015-12-08 | Ray Ganong | Method, system, and computer program for identification and sharing of digital images with face signatures |
US20100082624A1 (en) | 2008-09-30 | 2010-04-01 | Apple Inc. | System and method for categorizing digital media according to calendar events |
US20110013810A1 (en) * | 2009-07-17 | 2011-01-20 | Engstroem Jimmy | System and method for automatic tagging of a digital image |
US8670597B2 (en) * | 2009-08-07 | 2014-03-11 | Google Inc. | Facial recognition with social network aiding |
US8503739B2 (en) * | 2009-09-18 | 2013-08-06 | Adobe Systems Incorporated | System and method for using contextual features to improve face recognition in digital images |
US8660378B2 (en) * | 2010-02-10 | 2014-02-25 | Panasonic Corporation | Image evaluating device for calculating an importance degree of an object and an image, and an image evaluating method, program, and integrated circuit for performing the same |
US9465993B2 (en) | 2010-03-01 | 2016-10-11 | Microsoft Technology Licensing, Llc | Ranking clusters based on facial image analysis |
US20110243397A1 (en) * | 2010-03-30 | 2011-10-06 | Christopher Watkins | Searching digital image collections using face recognition |
US8724908B2 (en) * | 2010-09-03 | 2014-05-13 | Adobe Systems Incorporated | System and method for labeling a collection of images |
US8824748B2 (en) * | 2010-09-24 | 2014-09-02 | Facebook, Inc. | Auto tagging in geo-social networking system |
US20120213404A1 (en) * | 2011-02-18 | 2012-08-23 | Google Inc. | Automatic event recognition and cross-user photo clustering |
AU2012225536B9 (en) * | 2011-03-07 | 2014-01-09 | Kba2, Inc. | Systems and methods for analytic data gathering from image providers at an event or geographic location |
US8918463B2 (en) * | 2011-04-29 | 2014-12-23 | Facebook, Inc. | Automated event tagging |
2012
- 2012-06-15 US US13/525,037 patent/US8861804B1/en active Active

2014
- 2014-09-05 US US14/478,585 patent/US9063956B2/en active Active
- 2014-11-12 US US14/539,709 patent/US20150074107A1/en not_active Abandoned

2015
- 2015-06-05 US US14/731,833 patent/US9378408B2/en active Active

2016
- 2016-06-20 US US15/187,189 patent/US10043059B2/en active Active
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10438014B2 (en) * | 2015-03-13 | 2019-10-08 | Facebook, Inc. | Systems and methods for sharing media content with recognized social connections |
US10853406B2 (en) * | 2015-09-18 | 2020-12-01 | Commvault Systems, Inc. | Data storage management operations in a secondary storage subsystem using image recognition and image-based criteria |
US11321383B2 (en) | 2015-09-18 | 2022-05-03 | Commvault Systems, Inc. | Data storage management operations in a secondary storage subsystem using image recognition and image-based criteria |
US20170206197A1 (en) * | 2016-01-19 | 2017-07-20 | Regwez, Inc. | Object stamping user interface |
US10515111B2 (en) * | 2016-01-19 | 2019-12-24 | Regwez, Inc. | Object stamping user interface |
US10614119B2 (en) | 2016-01-19 | 2020-04-07 | Regwez, Inc. | Masking restrictive access control for a user on multiple devices |
US10621225B2 (en) | 2016-01-19 | 2020-04-14 | Regwez, Inc. | Hierarchical visual faceted search engine |
US10747808B2 (en) | 2016-01-19 | 2020-08-18 | Regwez, Inc. | Hybrid in-memory faceted engine |
US11093543B2 (en) | 2016-01-19 | 2021-08-17 | Regwez, Inc. | Masking restrictive access control system |
US11436274B2 (en) | 2016-01-19 | 2022-09-06 | Regwez, Inc. | Visual access code |
CN110830678A (en) * | 2019-11-14 | 2020-02-21 | 威创集团股份有限公司 | Multi-channel video signal synchronous output method, device, system and medium |
US20220092253A1 (en) * | 2020-09-18 | 2022-03-24 | Fujifilm Business Innovation Corp. | Information processing apparatus and non-transitory computer readable medium |
Also Published As
Publication number | Publication date |
---|---|
US20150269418A1 (en) | 2015-09-24 |
US10043059B2 (en) | 2018-08-07 |
US9063956B2 (en) | 2015-06-23 |
US9378408B2 (en) | 2016-06-28 |
US8861804B1 (en) | 2014-10-14 |
US20160292495A1 (en) | 2016-10-06 |
US20140376786A1 (en) | 2014-12-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20150074107A1 (en) | Storing and serving images in memory boxes | |
US9569658B2 (en) | Image sharing with facial recognition models | |
US10885380B2 (en) | Automatic suggestion to share images | |
US8194940B1 (en) | Automatic media sharing via shutter click | |
US10504001B2 (en) | Duplicate/near duplicate detection and image registration | |
US10200421B2 (en) | Systems and methods for creating shared virtual spaces | |
US8611678B2 (en) | Grouping digital media items based on shared features | |
US11580155B2 (en) | Display device for displaying related digital images | |
US20120117271A1 (en) | Synchronization of Data in a Distributed Computing Environment | |
JP2011526013A (en) | Image processing | |
JP2012530287A (en) | Method and apparatus for selecting representative images | |
JP6396897B2 (en) | Search for events by attendees | |
US9081801B2 (en) | Metadata supersets for matching images | |
US20090327857A1 (en) | System and method for providing metadata | |
US11089071B2 (en) | Symmetric and continuous media stream from multiple sources |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: JPMORGAN CHASE BANK, N.A., AS ADMINISTRATIVE AGENT Free format text: SECURITY AGREEMENT;ASSIGNOR:SHUTTERFLY, INC.;REEL/FRAME:039024/0761 Effective date: 20160610 |
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |