US9595258B2 - Context-based smartphone sensor logic - Google Patents

Context-based smartphone sensor logic

Info

Publication number
US9595258B2
US9595258B2
Authority
US
United States
Prior art keywords
visual information
received audio
type
information
recognition technologies
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/947,008
Other versions
US20160232898A1 (en)
Inventor
Tony F. Rodriguez
Yang Bai
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digimarc Corp
Original Assignee
Digimarc Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/174,258 (now US8831279B2)
Priority claimed from US13/207,841 (now US9218530B2)
Priority claimed from US13/278,949 (now US9183580B2)
Priority claimed from US13/299,140 (now US8819172B2)
Priority to US14/947,008 (US9595258B2)
Application filed by Digimarc Corp
Publication of US20160232898A1
Priority to US15/446,837 (US20170243584A1)
Publication of US9595258B2
Application granted
Priority to US15/711,357 (US10199042B2)
Priority to US16/262,634 (US10510349B2)
Priority to US16/709,463 (US10930289B2)
Legal status: Active (current)

Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06K9/00006
    • G06K9/00228
    • G06K9/6267
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/12 - Fingerprints or palmprints
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/70 - Multimodal biometrics, e.g. combining information from different biometric modalities
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/08 - Speech classification or search
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/20 - Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/28 - Constructional details of speech recognition systems
    • G10L15/285 - Memory allocation or algorithm optimisation to reduce hardware requirements
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M1/00 - Substation equipment, e.g. for use by subscribers
    • H04M1/72 - Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724 - User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72448 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions
    • H04M1/72454 - User interfaces specially adapted for cordless or mobile telephones with means for adapting the functionality of the device according to specific conditions according to context-related or environment-related conditions
    • H04M1/72569
    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 - Speech recognition
    • G10L15/22 - Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226 - Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M2250/00 - Details of telephonic subscriber devices
    • H04M2250/12 - Details of telephonic subscriber devices including a sensor for measuring a physical value, e.g. temperature or motion

Definitions

  • Systems and methods according to the present technology use a smartphone to sense audio and/or visual information, and provide same to a first classifier module.
  • The first classifier module characterizes the input audio-visual stimuli by type (e.g., music, speech, silence, video imagery, natural scene, face, etc.).
  • A second classifier module processes other context information (which may include the output from the first classifier module), such as time of day, day of week, location, calendar data, clock alarm status, motion sensors, Facebook status, etc., and outputs data characterizing a device state type, or scenario.
  • A control rule module then issues control signals to one or more content recognition modules in accordance with the outputs from the two classifier modules.
  • The control signals can simply enable or disable the different recognition modules. Additionally, if a recognition module is enabled, the control signals can establish the frequency, or schedule, or other parameter(s), by which the module performs its recognition functions.
  • Such arrangements conserve battery power, by not attempting operations that are unneeded or inappropriate to the context. Moreover, they aid other smartphone operations, since processing resources are not diverted to the idle recognition operations.
  • FIG. 1 shows an illustrative embodiment that incorporates certain aspects of the present technology.
  • FIG. 2 shows a few of the content recognition modules that may be used in the FIG. 1 embodiment.
  • FIG. 3 is a block diagram of a process employing aspects of the present technology.
  • FIG. 4 is a block diagram of an apparatus employing aspects of the present technology.
  • FIG. 5 is an event controller table illustrating, for one embodiment, how different audio recognition agents are activated based on audio classification data.
  • FIG. 6 is a flow chart illustrating, for one embodiment, how different audio recognition agents are activated based on audio classification data.
  • FIG. 7 is an event controller table illustrating, for one embodiment, how different image recognition agents are activated based on outputs from light and motion sensors, and image classification data.
  • FIG. 8 is a flow chart illustrating, for one embodiment, how different image recognition agents are activated based on outputs from light and motion sensors, and image classification data.
  • Referring to FIG. 1, an illustrative embodiment 10 that incorporates certain aspects of the present technology includes one or more microphones 12, cameras 14, audio-visual classifier modules 16, second classifier modules 18, control rule modules 20, and content recognition modules 22. These components may all be included in a smartphone. Alternatively, they may be distributed between different locations and/or different devices (including the cloud).
  • One suitable smartphone is the Apple iPhone 4 device, which includes two cameras (one front-facing and one rear-facing), and two microphones.
  • the audio-visual classifier module(s) 16 processes the data captured by the microphone(s) and/or camera(s), and classifies such audio-visual content by type.
  • classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs.
  • the individual observations may be analyzed into a set of quantifiable properties, known variously as variables, features, etc. These properties may be categorical (e.g. “A”, “B”, “AB” or “O”, for blood type), ordinal (e.g. “large”, “medium” or “small”), etc.
  • a familiar (although sometimes difficult) classification problem is identifying email as spam or not-spam.
  • An algorithm or procedure that implements classification is known as a classifier.
  • classification is performed based on a training set of data comprising observations (or instances) whose category membership is known.
  • Classification in this sense, is regarded as an instance of supervised machine learning, i.e., learning where a training set of correctly-identified observations is available.
  • the corresponding unsupervised procedure is known as clustering (or cluster analysis), and involves grouping data into categories based on some measure of inherent similarity (e.g. the distance between instances, considered as vectors in a multi-dimensional vector space).
  • classification is regarded as including clustering.
  • One illustrative classifier module 16 is an audio classifier, which categorizes input stimulus as speech, music, background/indeterminate, or silence. For the first three, the module also classifies the volume of the audio, as loud, mid-level, or quiet.
  • a simple embodiment activates different content recognition modules, in accordance with the output of the audio classifier, as follows:
  • If the audio classifier module classifies the sensed audio as "silent" or "quiet background," all three of the detailed content recognition modules are controlled to "off." If the sensed audio is classified as music, the system activates the Nielsen audio watermark detector, and the Gracenote fingerprint engine, but leaves the Nuance speech recognition engine off.
  • If the sensed audio is classified as speech, the audio watermark detector is activated, as is the speech recognition engine, but no fingerprint calculations are performed.
  • If the audio classifier identifies a loud or mid-level acoustic background, but is unable to further classify its type, the audio watermark detector, the fingerprint engine, and the speech recognition engine are all activated.
  • Second classifier module 18 outputs device state type data, or scenario identification data, in accordance with context information.
  • This context information may include the classification of the audio and/or visual environment (i.e., by the audio-visual classifier module(s) 16 , as shown by the dashed line in FIG. 1 ), and typically includes other information.
  • This other context information can include, but is not limited to, time of day, day of week, location, calendar data, clock alarm status, motion and orientation sensor data, social network information (e.g., from Facebook), etc.
  • Table II expands the Table I information to include certain device state types as determined by the second classifier module (i.e., "away from office after work hours" and "at office during work hours").
  • The first five rows of Table II are identical to Table I. They detail how the different modules are controlled when the user is away from the office, after work hours, given the noted audio environments.
  • the final set of rows is different. These correspond to the device state type of being at the user's office, during work hours. As can be seen, only the speech recognition engine is ever activated in this context (i.e., when speech or loud background audio is sensed); the other modules are left idle regardless of the audio environment.
  • the second classifier module 18 uses inputs such as time of day data and GPS data, in connection with reference data.
  • This reference data establishes—for the particular smartphone user—the times of day that should be classified as work hours (e.g., 8 am-5 pm, Monday-Friday), and the location that should be classified as the office location (e.g., 45.4518° lat, −122.7932° long, +/−0.0012 degrees).
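The following Python sketch illustrates one way such a second classifier might combine time-of-day and GPS data against per-user reference data. The helper name and the exact reference values are illustrative assumptions, not the patent's implementation.

```python
from datetime import datetime

# Illustrative per-user reference data (values follow the example above).
WORK_HOURS = (8, 17)            # 8 am - 5 pm
WORK_DAYS = range(0, 5)         # Monday - Friday
OFFICE_LAT, OFFICE_LON = 45.4518, -122.7932
OFFICE_TOLERANCE_DEG = 0.0012

def classify_device_state(now: datetime, lat: float, lon: float) -> str:
    """Second-classifier sketch: map time-of-day and GPS data to a scenario label."""
    at_office = (abs(lat - OFFICE_LAT) <= OFFICE_TOLERANCE_DEG and
                 abs(lon - OFFICE_LON) <= OFFICE_TOLERANCE_DEG)
    work_hours = (now.weekday() in WORK_DAYS and
                  WORK_HOURS[0] <= now.hour < WORK_HOURS[1])
    if at_office and work_hours:
        return "at office during work hours"
    return "away from office after work hours"

# Example: a Tuesday at 10 am, at the office coordinates.
print(classify_device_state(datetime(2012, 9, 11, 10, 0), 45.4519, -122.7931))
```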
  • Table IIIA shows a more detailed scenario, as classified by the second classifier module 18 :
  • This confluence of circumstances is classed by the second classifier module as "Scenario 1." It corresponds to the scenario in which the user is probably asleep (it is before 6:30 am on a weekday, and the alarm is set for 6:30; the smartphone is stationary in a quiet, dark environment).
  • the control rules 20 associated with Scenario 1 cause all of the content recognition modules to be inactive.
  • Scenario 2 corresponds to the user after waking up and before leaving home.
  • the rules include instructions appropriate to this interval—while the user may be watching a morning news program on television, listening to the radio, or talking with a spouse.
  • the Nielsen watermark detector is active, to allow the user to link to additional web content about something discussed on the television.
  • the fingerprint engine is also active, so that the user might identify an appealing song that airs on the radio. Speech recognition may also be enabled, so that the spouse's verbal instructions to pick up ketchup, grapes, tin foil and postage stamps on the way home are transcribed for later reference.
  • the user's smartphone also includes various visual content recognition capabilities, including facial recognition.
  • the control rules specify that, in Scenario 2, facial recognition is disabled—as the user isn't expected to need prompting to recall faces of anyone encountered at home this early.
  • Scenario 3 corresponds to the user's drive to work. No television audio is expected in this environment, so the Nielsen watermark detector is disabled. Song recognition and transcription of news from talk radio, however, may be helpful, so the fingerprint and speech recognition engines are enabled. Again, facial recognition is disabled.
  • A different user may take the bus to work, instead of driving a car.
  • The control rules for this commute scenario may be different for such a user. Without a car radio, song recognition is unneeded, so the fingerprint engine is disabled. However, the user sometimes overhears amusing conversations on the bus, so speech recognition is enabled so that any humorous dialog may be shared with work-mates. Occasionally, the user sees someone she should recognize on the bus—a parent of a child's soccer teammate, for example—but is unable to recall the name in this different environment. To prepare for this eventuality, the smartphone's facial recognition capability is loaded into memory and ready for operation, but does not process a frame of camera imagery until signaled by the user. (The signal may comprise the user holding the phone in a predetermined pose and saying the word "who.")
  • In Scenario 4, the smartphone is apparently lying face-up on a surface in the quiet work environment.
  • The corresponding control rules specify that all recognition modules are disabled.
  • If the audio classifier indicates a change in audio environment (to mid-level or loud background sound, or speech), the rules cause the phone to enable the speech recognition module. This provides the user with a transcribed record of any request or information she is given, or any instruction she issues, so that it may be referred-to later.
  • Speech recognition can raise privacy concerns in some situations, including a work setting. Accordingly, the control rules cause the speech recognition module to issue an audible “beep” every thirty seconds when activated at work, to alert others that a recording is being made. In contrast, no “beep” alert is issued in the previously-discussed scenarios, because no recording of private third party speech is normally expected at home or in the car, and there is likely no expectation of privacy for speech overheard in a bus.
  • Another datum of context that is processed by the illustrated second classifier module 18 is the number and identity of people nearby. “Nearby” may be within the range of a Bluetooth signal issued by a person's cell phone—typically 30 feet or less. Relative distance within this range can be assessed by strength of the Bluetooth signal, with a strong signal indicating, e.g., location within ten feet or less (i.e., “close”). Identity can be discerned—at least for familiar people—by reference to Bluetooth IDs for their known devices. Bluetooth IDs for devices owned by the user, family members, workmates, and other acquaintances, may be stored with the control rules to help discriminate well known persons from others.
  • the rules can provide that speech recognition—if enabled—is performed without alert beeps, if the user is apparently solitary (i.e., no strong Bluetooth signals sensed, or only transitory signals—such as from strangers in nearby vehicles), or if the user is in the presence only of family members. However, if an unfamiliar strong Bluetooth signal is sensed when speech recognition is enabled, the system can instruct issuance of periodic alert beeps.
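A minimal sketch of how such Bluetooth-based proximity context might be classified. The RSSI threshold, the scan-result format, and all identifiers are assumptions for illustration, not values from the patent.

```python
# Hypothetical scan result: list of (bluetooth_id, rssi_dbm) tuples from a BT scan.
KNOWN_IDS = {
    "AA:BB:CC:11:22:33": "family",
    "AA:BB:CC:44:55:66": "workmate",
}
CLOSE_RSSI_DBM = -60   # assumed threshold for a "strong" signal, i.e. within ~10 feet

def proximity_context(scan_results):
    """Classify nearby people: who is close, and whether any strangers are present."""
    close = [(bt_id, KNOWN_IDS.get(bt_id, "unknown"))
             for bt_id, rssi in scan_results if rssi >= CLOSE_RSSI_DBM]
    strangers_close = any(rel == "unknown" for _, rel in close)
    return {"close_people": close, "strangers_close": strangers_close}

def speech_recognition_beep_policy(context):
    """Per the rule above: beep only when an unfamiliar strong signal is sensed."""
    return context["strangers_close"]

ctx = proximity_context([("AA:BB:CC:11:22:33", -55), ("DE:AD:BE:EF:00:01", -52)])
print(speech_recognition_beep_policy(ctx))   # True: a stranger is close by
```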
  • the user's phone can present a user interface screen allowing the user to store this previously-unrecognized Bluetooth identifier.
  • This UI allows the user to specify the identifier as corresponding to a family member, or to associate more particular identification information with the identifier (e.g., name and/or relationship). By such arrangement, beeping when not warranted is readily curtailed, and avoided when such circumstance recurs in the future.
  • Scenario 5 a work meeting—is contextually similar to Scenario 4, except the audio classifier reports mid-level background audio, and the phone's location is in a meeting room.
  • the speech recognition module is enabled, but corporate data retention policies require that transcripts of meetings be maintained only on corporate servers, so that they can be deleted after a retention period (e.g., 12 months) has elapsed.
  • the control rules module 20 complies with this corporate policy, and immediately transmits the transcribed speech data to a corporate transcriptions database for storage—keeping no copy. Alert beeps are issued as a courtesy reminder of the recording.
  • the rules cause the phone to beep only once every five minutes, instead of once every 30 seconds, to reduce the beeps' intrusiveness.
  • the volume of the beeps can be reduced, based on a degree of social relationship between the user and the other sensed individual(s)—so that the beeps are louder when recording someone who is only distantly or not at all socially related to the user.
  • Rules for face recognition in scenario 5 can vary depending on whether people sensed to be close-by are recognized by the user's phone. If all are recognized, the facial recognition module is not activated. However, if one or more close-by persons are not in the user's just-noted "friends" list (or within some more distant degree of relationship in a social network), then facial recognition is enabled as before—in an on-demand (rather than free-running) mode. (Alternatively, a different arrangement can be employed, e.g., with facial recognition activated if one or more persons who have a certain type of social network linkage to the user, or absence thereof, are sensed to be present.)
  • Scenario 6 finds the user in a subway, during the noon hour.
  • the rules may be like those noted above for the bus commute.
  • Radio reception underground is poor, so any facial recognition operation consults only facial eigenface reference data stored on the phone—rather than consulting the user's larger collection of Facebook or Picasa facial data, which are stored on cloud servers.
  • Scenario 7 corresponds to a Friday evening birthday party. Lots of unfamiliar people are present, so the rules launch the facial recognition module in free-running mode—providing the user with the names of every recognized face within the field-of-view of any non-dark camera.
  • This module relies on the user's Facebook and Picasa facial reference data stored in the cloud, as well as such data maintained in Facebook accounts of the user's Facebook friends. Speech recognition is disabled. Audio fingerprinting is enabled and—due to the party context—the phone has downloaded reference fingerprints for all the songs on Billboard's primary song lists (Hot 100, Billboard 200, and Hot 100 Airplay). Having this reference data cached on the phone allows much quicker operation of the song recognition application—at least for these 200+ songs.
  • Fingerprint computation, watermark detection, and speech/facial recognition are computationally relatively expensive (“computationally heavy”). So are many classification tasks (e.g., speech/music classification). It is desirable to prevent such processes from running at a 100% duty cycle.
  • One approach is to let the user decide when to run one or more heavy modules—with help from the output of one or more computationally light detectors. Adding additional steps to assess the signal quality prior to running one or more heavy detectors is another approach.
  • One computationally light detector is a simple classifier (e.g., a quietness classifier). Such a module may indicate that there is a sudden change in the environment from a quiet state.
  • Rules may call for activation of one or more heavy classifiers to determine whether the new audio environment is music or speech.
  • Before activating the heavy classifier(s), the system may present a display screen with a "Confirm to Proceed" button that the user taps to undertake the classification. (There can also be an "Ignore" button. The system can have a default behavior, e.g., "Ignore," if the user makes no selection within a pre-defined interval, such as ten seconds.)
  • the user response to such prompts can be logged, and associated with different context information (including the sensitivity of the quietness classifier). Over time, this stored history data can be used to predict the circumstances in which the user instructs the heavy classifier to proceed. Action can then be taken based on such historical precedent, rather than always resorting to a user tap.
  • the system can be self-learning, based on user interaction. For example, when a quietness classifier detects a change in loudness of amount “A,” it asks for the user's permission to enable a heavier classifier (for example, music versus speech classifier) or detector (e.g., watermark detector). If the user agrees, then this “A” level of loudness change is evidently at least sometimes of interest to the user. However, if over time, it becomes evident that the user uniformly refuses to activate the heavy classifier when the loudness changes by amount “A,” then the classifier can reset its threshold accordingly, and not ask the user for permission to activate the heavy module unless the loudness increases by “B” (where B>A). The quietness classifier thus learns to be less sensitive.
  • Conversely, if the user routinely elects to run a heavier classifier (e.g., a music versus speech classifier) or detector (e.g., a watermark detector) even after smaller loudness changes, the classifier can lower its threshold accordingly; the quietness classifier thus learns to be more sensitive.
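A minimal sketch of this self-learning behavior, under the assumption that user responses to the permission prompt are logged as (loudness_change, accepted) pairs. The class name, the initial threshold, and the adjustment step are illustrative.

```python
class QuietnessClassifier:
    """Sketch of a quietness classifier whose prompt threshold adapts to user responses."""

    def __init__(self, threshold_db=6.0, step_db=1.0, history_len=10):
        self.threshold_db = threshold_db   # loudness change that triggers a user prompt
        self.step_db = step_db
        self.history_len = history_len
        self.responses = []                # recent (change_db, user_accepted) pairs

    def should_prompt(self, change_db):
        """Ask the user about running a heavy module only for large enough changes."""
        return change_db >= self.threshold_db

    def record_response(self, change_db, user_accepted):
        """Log the user's answer and adapt the threshold to the observed pattern."""
        self.responses.append((change_db, user_accepted))
        recent = self.responses[-self.history_len:]
        if len(recent) < self.history_len:
            return
        near_threshold = [acc for chg, acc in recent
                          if chg < self.threshold_db + self.step_db]
        if near_threshold and not any(near_threshold):
            # User uniformly refuses prompts for changes of this size: less sensitive.
            self.threshold_db += self.step_db
        elif all(acc for _, acc in recent):
            # User accepts every prompt: become more sensitive.
            self.threshold_db -= self.step_db
```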
  • FIG. 3 shows an arrangement using the foregoing principles.
  • a microphone provides an ambient audio signal to a simple classifier, which produces an output based on a gross classification (e.g., silence or sound), based on a threshold audio level. If the classifier module switches from “silence” to “sound,” it causes the smartphone to present a user interface (UI) asking the user whether the system should invoke complex processing (e.g., speech recognition, speech/music classification, or other operation indicated by applicable rules). The system then acts in accordance with the user's instruction.
  • The user's response can be logged in a user history; the current context, as provided by a context classifier, is also stored in such history.
  • Eventually, the user history alone may provide instructions as to how to respond in a given situation—without the need to ask the user.
  • Another approach is to employ an additional classifier, to decide whether the current audio samples have a quality that merits further classification (i.e., with a heavy classifier). If the quality is judged to be insufficient, the heavy classifier is not activated (or is de-activated).
  • Information-bearing signals, such as speech and music, are commonly characterized by temporal variation, at least in spectral frequency content, and also generally in amplitude, when analyzed over brief time windows (e.g., 0.5 to 3 seconds).
  • An additional classifier can listen for audio signals that are relatively uniform in spectral frequency content, and/or that are relatively uniform in average amplitude, over such a window interval.
  • If such a uniform signal is detected, it may be regarded as interfering noise that unacceptably impairs the signal-to-noise ratio of the desired audio.
  • In this case, the system discontinues heavy module processing until the interfering signal ceases.
  • If a jack-hammer is operated nearby, for example, the just-discussed classifier detects the interval during which the jack-hammer is operated, and interrupts heavy audio processing during such period.
  • Such classifier may similarly trigger when a loud train passes, or an air compressor operates, or even when a telephone rings—causing the system to change from its normal operation in these circumstances.
  • Another simple classifier relies on principles noted in Lu et al, SpeakerSense: Energy Efficient Unobtrusive Speaker Identification on Mobile Phones, Pervasive Computing Conf., 2011. Lu et al use a combination of signal energy (RMS) and zero crossing rate (ZCR) to distinguish human speech from other audio. While Lu et al use these parameters to identify speech, they can also be used to identify information-bearing signals more generally. (Or, stated otherwise, to flag audio passages that are likely lacking information, so that heavy processing modules can be disabled.)
  • Since the additional classifier works after the detection of a "sound change," audio samples prior to the "sound change" can be used as an approximation of the background noise, and the audio samples after the "sound change" can be used as the background noise plus the useful signal. This gives a crude signal-to-noise ratio.
  • The additional classifier can keep heavy modules in an idle state until this ratio exceeds a threshold value (e.g., 10 dB).
  • Still another additional classifier, to indicate the likely absence of an information-bearing signal, simply looks at a ratio of frequency components.
  • Typically, the presence of high frequency signal components above a threshold amplitude is an indication of audio information.
  • A ratio of energy in high frequency components (e.g., above 2 KHz) to energy in low frequency components (e.g., below 500 Hz) can serve as such an indicator, with heavy modules kept idle when this ratio is low.
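The screening features above lend themselves to a short sketch. Below is a minimal Python example (assuming numpy) computing the crude SNR around a detected sound change and the high/low band-energy ratio; the RMS and zero-crossing-rate features noted from Lu et al are included as helpers. The threshold defaults other than the 10 dB SNR figure are illustrative assumptions.

```python
import numpy as np

def rms(samples):
    """Signal energy (RMS), one of the features used by Lu et al."""
    return float(np.sqrt(np.mean(samples ** 2)))

def zero_crossing_rate(samples):
    """Zero crossing rate per sample, the other Lu et al feature."""
    return float(np.mean(np.abs(np.diff(np.sign(samples)))) / 2.0)

def band_energy_ratio(samples, sample_rate, low_cut=500.0, high_cut=2000.0):
    """Ratio of spectral energy above high_cut to energy below low_cut."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
    return spectrum[freqs >= high_cut].sum() / (spectrum[freqs <= low_cut].sum() + 1e-12)

def crude_snr_db(before_change, after_change):
    """Treat audio before a detected 'sound change' as noise, audio after as signal+noise."""
    return 20.0 * np.log10(rms(after_change) / (rms(before_change) + 1e-12))

def enable_heavy_modules(before, after, sample_rate,
                         snr_thresh_db=10.0, ratio_thresh=0.05):
    """Screening decision: run heavy detectors only if the audio looks information-bearing."""
    if crude_snr_db(before, after) < snr_thresh_db:
        return False
    if band_energy_ratio(after, sample_rate) < ratio_thresh:
        return False          # little high-frequency content: likely not informative
    return True

rate = 16000
t = np.arange(rate) / rate
tone = 0.5 * np.sin(2 * np.pi * 3000 * t)          # energetic above 2 kHz
print(enable_heavy_modules(0.01 * np.random.randn(rate), tone, rate))   # True
```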
  • In the FIG. 4 arrangement, one or more microphones provides a sensed audio signal to an audio screening classifier 30 (i.e., the "additional" classifier of the foregoing discussion).
  • the microphone audio is optionally provided to a speech/music audio classifier 16 (as in FIG. 1 ) and to several heavy audio detector modules (e.g., watermark detector, speech recognition, etc.).
  • the output of the audio screening classifier provides enable/disable control signals to the different heavy detectors.
  • The audio screening classifier 30 provides the same control signal to all the heavy detectors, but in practical implementation, different control signals may be generated for different detectors.
  • the control signals from the audio screening classifier serve to disable the heavy detector(s), based on the audio sensed by the microphone.
  • Also shown in FIG. 4 is a context classifier 18, which operates like the second classifier module of FIG. 1. It outputs signals indicating different context scenarios. These output data are provided to a control rules module 20, which controls the mode of operation of the different heavy detectors based on the identified scenario.
  • While the FIG. 4 arrangement shows control of heavy detector modules, heavy classifier modules can be controlled by the same type of arrangement.
  • Visual image classifiers (e.g., facial recognition systems) rely on imagery having significant spatial variation in luminance (contrast/intensity) and/or hue (color/chrominance). If frames of imagery appear that are lacking in such variations, any heavy image processing module that would otherwise be operating should suspend its operation.
  • a classifier can look for a series of image frames characterized by luminance or hue variation below a threshold, and interrupt heavy visual processing when such scene is detected.
  • By such an arrangement, heavy visual processes are suspended when the user points a camera at a blank wall, or at the floor.
  • Such action may also be taken based on smartphone orientation, e.g., with facial recognition only operative when the smartphone is oriented with its camera axis within 20 degrees of horizontal.
  • Other threshold values can, of course, be used.
  • a simple classifier can examine frame focus (e.g., by known metrics, such as high frequency content and contrast measures, or by a camera shake metric—provided by the phone's motion sensors), and disable facial recognition if the frame is likely blurred.
  • Facial recognition can also be disabled if the subject is too distant to likely allow a correct identification. Thus, for example, if the phone's autofocus system indicates a focal distance of ten meters or more, facial recognition needn't be engaged.
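A minimal sketch of such image-side gating, combining a luminance-variation check, a camera-pitch check, and an autofocus-distance check. The function name and the minimum-variation threshold are illustrative assumptions; the 20-degree and ten-meter figures follow the text.

```python
import numpy as np

def should_run_facial_recognition(gray_frame, pitch_deg, focal_distance_m,
                                  min_luma_std=10.0, max_pitch_deg=20.0,
                                  max_distance_m=10.0):
    """Gate a heavy facial-recognition module on cheap frame and sensor checks.

    gray_frame: 2-D numpy array of luminance values (0-255).
    pitch_deg: camera-axis angle from horizontal, from the phone's motion sensors.
    focal_distance_m: distance reported by the autofocus system.
    """
    if np.std(gray_frame) < min_luma_std:      # blank wall / floor: too little variation
        return False
    if abs(pitch_deg) > max_pitch_deg:         # camera axis not roughly horizontal
        return False
    if focal_distance_m >= max_distance_m:     # subject too distant to identify
        return False
    return True

frame = np.random.randint(0, 255, (480, 640)).astype(float)
print(should_run_facial_recognition(frame, pitch_deg=5.0, focal_distance_m=2.0))
```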
  • Other arrangements can also be used to identify nearby people. One technique relies on the smartphone's calendar app.
  • If the user's calendar and phone clock indicate the user is at a meeting, other participants in the user's proximity can be identified from the meeting participant data in the calendar app.
  • Another technique relies on location data that is short-range-broadcast from the phone (or published from the phone to a common site), and used to indicate co-location with other phones.
  • The location data can be derived from known techniques, including GPS, WiFi node identification, etc.
  • a related approach relies on acoustic emitters that introduce subtle or inaudible background audio signals into an environment, which can be indicative of location.
  • Software in a microphone-equipped device (e.g., a smartphone app) can listen for such signals (e.g., above or below the range of human hearing, such as above 15-20 KHz), and publish information about what it hears.
  • The published information can include information conveyed by the sensed signal (e.g., identifying the emitting device or its owner, the device location and/or other context, etc.).
  • the published information can also include information associated with the receiving device (e.g., identifying the device or its owner, the device location and/or other context, etc.). This allows a group of phones near each emitter to be identified. (Related technology is employed by the Shopkick service, and is detailed in patent publication US20110029370.)
  • Bluetooth is presently preferred because, in addition to identifying nearby people, it also provides a communication channel with nearby phones. This enables the phones to collaborate in various tasks, including speech recognition, music fingerprinting, facial recognition, etc. For example, plural phones can exchange information about their respective battery states and/or other on-going processing tasks. An algorithm is then employed to select one phone to perform a particular task (e.g., the one with the most battery life remaining is selected to perform watermark decoding or facial recognition). This phone then transmits the results of its task—or related information based thereon—to the other phones (by Bluetooth or otherwise).
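A minimal sketch of the battery-based task election just described, assuming each phone has already exchanged a (phone_id, battery fraction) status over Bluetooth; the message format and function name are assumptions.

```python
def elect_worker(peer_status):
    """Pick the collaborating phone with the most battery remaining to run a heavy task.

    peer_status: dict mapping phone_id -> battery fraction (0.0-1.0), including this phone.
    Returns the id of the phone that should perform the task.
    """
    return max(peer_status, key=peer_status.get)

status = {"my_phone": 0.35, "alice_phone": 0.80, "bob_phone": 0.55}
print(elect_worker(status))   # "alice_phone" runs the task and shares its results
```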
  • One such collaborative task is 3D image modeling based on camera data from two or more different phones, each with a different view of a subject.
  • a particular application is facial recognition, where two or more different views of a person allow a 3D facial model to be generated. Facial recognition can then be based on the 3D model information—yielding a more certain identification than 2D facial recognition affords.
  • a phone processes ambient audio/visual stimulus in connection with phone-specific information, allowing different phones to provide different results.
  • phone-specific information includes history, contacts, computing context, user context, physical context, etc. See, e.g., published application 20110161076 and copending application Ser. No. 13/174,258, filed Jun. 30, 2011 (now U.S. Pat. No. 8,831,279).
  • In image processing tasks, different phones may have better or worse views of the subject.
  • collaborating phones can send the audio/imagery they captured to one or more other phones for processing.
  • For example, a phone having Facebook access to useful facial recognition data may not be the phone with the best view of a person to be identified. If plural phones each capture data, and share such data (or information based thereon, e.g., eigenface data) with the other phones, results may be achieved that are better than any phone—by itself—could manage.
  • devices may communicate other than by Bluetooth.
  • NFC and WiFi are two such alternatives.
  • Bluetooth was also noted as a technique for determining that a user is in a vehicle. Again, other arrangements can be employed.
  • GPS can establish that the user is following established roads, and is moving at speeds above that associated with walking or biking.
  • terrain elevation can be considered. If the terrain is generally flat, or if the traveler is going uphill, a sustained speed of more than 20 mph may distinguish motorized transport from bicycling. However, if the user is following a road down a steep downhill incline, then a sustained speed of more than 35 mph may be used to establish motorized travel with certainty.
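A minimal sketch of this speed-and-grade heuristic, following the 20 mph and 35 mph figures above. The grade threshold used to decide whether a road segment counts as a steep downhill is an illustrative assumption.

```python
def is_motorized_travel(sustained_speed_mph, road_grade_percent,
                        steep_downhill_grade=-6.0):
    """Distinguish motorized transport from bicycling using sustained speed and terrain.

    road_grade_percent: negative values are downhill (assumed convention).
    On flat or uphill terrain, more than 20 mph suggests a motor vehicle; on a steep
    downhill, a higher 35 mph threshold is used, since cyclists can coast quickly.
    """
    if road_grade_percent <= steep_downhill_grade:
        return sustained_speed_mph > 35.0
    return sustained_speed_mph > 20.0

print(is_motorized_travel(28.0, 0.5))    # True: 28 mph on essentially flat ground
print(is_motorized_travel(28.0, -8.0))   # False: could be a bicycle coasting downhill
```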
  • If two or more phones report, e.g., by a shared short-range context broadcast, that they are each following the same geo-location track, at the same speed, then the users of the two phones can conclude that they are traveling on the same conveyance—whether car, bus, bike, etc.
  • The cloud can serve as a recipient for such information reported by the smartphones, and can make determinations, e.g., about correlation between devices.
  • the information shared may be of such a character (e.g., acceleration, captured audio) that privacy concerns do not arise—given the short range of transmission involved.
  • Other content recognition operations that can be controlled in these fashions include optical character recognition (OCR) and barcode decoding.
  • a user-facing camera can be used to assess emotions of a user (or user response to information presented on the smartphone screen), and tailor operation of the phone—including use of the other camera—accordingly.
  • A user-facing camera can also detect the user's eye position. Operation of the phone can thereby be controlled. For example, instead of switching between "portrait" and "landscape" display modes based on the phone's position sensors, this screen display mode can be controlled based on the orientation of the user's eyes.
  • If the long axis of the phone is perpendicular to the axis between the user's eyes, the phone can operate its display in the "portrait" mode. If the user turns the phone ninety degrees (i.e., so that its long axis is parallel to the axis between the user's eyes), the phone switches its display mode to "landscape."
  • The screen mode thus switches to follow the relative orientation of the axis between the user's eyes, relative to the screen axis. (That is, if the long axis of the phone is parallel with the axis between the user's eyes, landscape mode is used; and vice versa.)
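A minimal sketch of choosing the display mode from the angle between the phone's long axis and the inter-eye axis, both expressed in the user-facing camera's image plane. The 45-degree decision boundary and the fixed 90-degree phone-axis convention are illustrative assumptions.

```python
def display_mode(eye_axis_deg, phone_long_axis_deg=90.0):
    """Return 'landscape' if the phone's long axis is roughly parallel to the axis
    between the user's eyes (as seen by the user-facing camera), else 'portrait'.

    Angles are measured in the camera's image plane, in degrees.
    """
    diff = abs(eye_axis_deg - phone_long_axis_deg) % 180.0
    diff = min(diff, 180.0 - diff)          # fold to the range 0-90 degrees
    return "landscape" if diff < 45.0 else "portrait"

print(display_mode(eye_axis_deg=0.0))    # eyes horizontal, phone upright: portrait
print(display_mode(eye_axis_deg=85.0))   # axes nearly parallel: landscape
```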
  • If the phone is equipped with stereo cameras (i.e., two cameras with fields of view that overlap), the two views can be used for distance determination to any point in the frame (i.e., range finding).
  • The distance information can be used by the phone processor to guide the user to move the phone closer to, or further from, the intended subject in order to achieve best results.
  • a phone may seek to identify an audio scene by reference to sensed audio.
  • For example, a meeting room scene may be acoustically characterized by a quiet background, with distinguishable human speech, and with occasional sound source transitions (different people speaking alternately).
  • a home scene with the user and her husband may be acoustically characterized by a mid-level background audio (perhaps music or television), and by two different voices speaking alternately.
  • a crowded convention center may be characterized by a high-level background sound, with many indistinguishable human voices, and occasionally the user's voice or another.
  • Depending on the identified audio scene, two or more smartphones can act and cooperate in different ways. For example, if the scene has been identified as a meeting, the user's phone can automatically check-in for the room, indicating that the meeting room is occupied. (Calendar programs are often used for this, but impromptu meetings may occupy rooms without advance scheduling. The smartphone can enter the meeting on the calendar—booking the room against competing reservations—after the meeting has started.)
  • the phone may communicate with a laptop or other device controlling a Powerpoint slide presentation, to learn the number of slides in the deck being reviewed, and the slide currently being displayed.
  • the laptop or the phone can compute how quickly the slides are being advanced, and extrapolate when the meeting will conclude. (E.g., if a deck has 30 slides, and it has taken 20 minutes to get through 15 slides, a processor can figure it will take a further 20 minutes to get through the final 15 slides. Adding 10 minutes at the end for a wrap-up discussion, it can figure that the meeting will conclude in 30 minutes.)
  • This information can be shared with participants, or posted to the calendar app to indicate when the room might become available.
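A minimal sketch of the extrapolation arithmetic described above; the ten-minute wrap-up allowance follows the example in the text, and the function name is illustrative.

```python
def minutes_until_meeting_ends(total_slides, slides_shown, elapsed_minutes,
                               wrapup_minutes=10.0):
    """Extrapolate time remaining from the rate at which slides are being advanced."""
    minutes_per_slide = elapsed_minutes / slides_shown
    remaining_slides = total_slides - slides_shown
    return remaining_slides * minutes_per_slide + wrapup_minutes

# The example from the text: 30 slides, 15 shown in 20 minutes, plus a 10 minute wrap-up.
print(minutes_until_meeting_ends(30, 15, 20))   # 30.0 minutes
```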
  • If the scene is identified as the home scene noted above, the two phones can exchange domestic information (e.g., shopping list information, social calendar data, bills to be paid soon, etc.).
  • In the crowded convention center scene, a phone can initiate automatic electronic business card exchange (e.g., v-card), if it senses the user having a chat with another person, and the phone does not already have contact information for the other person (e.g., as identified by a Bluetooth-indicated cell phone number, or otherwise).
  • the user's phone might also check the public calendars of people with whom the user talks, to identify those with similar transportation requirements (e.g., a person whose flight departs from the same airport as the user's departing flight, with a flight time within 30 minutes of the user's flight). Such information can then be brought to the attention of the user, e.g., with an audible or tactile alert.
  • Tasks can be referred to the cloud based on various factors.
  • An example is to use cloud processing for "easy to transmit" data (i.e., small size) and "hard to calculate" tasks (i.e., computationally complex). Cloud processing is often best suited for tasks that don't require extensive local knowledge (e.g., device history and other information stored on the device).
  • Consider a traveler flying to San Francisco for a conference, who needs to commute to a conference center hotel downtown.
  • On landing at the airport, the user's phone sends the address of the downtown hotel/conference center to a cloud server.
  • the cloud server has knowledge of real-time traffic information, construction delays, etc.
  • the server calculates the optimal route under various constraints, e.g., shortest-time route, shortest-distance route, most cost-effective route, etc. If the user arrives at the airport only 20 minutes before the conference begins, the phone suggests taking a taxi (perhaps suggesting sharing a taxi with others that it senses have the same destination—perhaps others who also have a third-party trustworthiness score exceeding “good”).
  • the smartphone can rely on audio and imagery collected by public sensors, such as surveillance cameras in a parking garage, mall, convention center, or a home security system. This information can be part of the “big computation” provided by cloud processing. Or the data can be processed exclusively by the smartphone, such as helping the user find where she parked her yellow Nissan Leaf automobile in a crowded parking lot.
  • information from the user's social networking accounts can be used as input to the arrangements detailed herein (e.g., as context information).
  • So, too, can information from the accounts of people that the user encounters (e.g., at work, home, conferences, etc.).
  • information output from the detailed arrangements can be posted automatically to the user's social networking account(s).
  • facial recognition has a number of uses.
  • One, noted above, is as a memory aid—prompting the user with a name of an acquaintance.
  • the user's smartphone may broadcast certain private information only if it recognizes a nearby person as a friend (e.g., by reference to the user's list of Friends on Facebook).
  • Facial recognition can also be used to tag images of a person with the person's name and other information.
  • a user's smartphone broadcasts one or more high quality facial portraits of the user, or associated eigenface data.
  • Another smartphone user can snap a poor picture of the user. That smartphone compares the snapped image with high quality image data (or eigenface data) received over Bluetooth from the user, and can confirm that the poor picture and the received image data correspond to the same individual.
  • the other smartphone uses the received image data in lieu of the poor picture, e.g., for facial recognition, or to illustrate a Contacts list, or for any other purpose where the user's photo might be employed.
  • FIG. 5 shows an event controller table for another audio embodiment, indicating how two digital watermark decoders (one tailored for watermarks commonly found in music, another tailored for watermarks commonly found in broadcast speech) are controlled, based on classifier data categorizing input audio as likely silence, speech, and/or music.
  • FIG. 6 shows a corresponding flow chart.
  • FIG. 7 shows an event controller table for another embodiment—this one involving imagery.
  • This arrangement shows how different recognition modules (1D barcode, 2D barcode, image watermark, image fingerprint, and OCR) are controlled in accordance with different sensor information.
  • Sensors can encompass logical sensors, such as classifiers.
  • the system includes a light sensor, and a motion sensor. Additionally, one or more image classifiers outputs information identifying the imagery as likely depicting text, a 1D barcode, or a 2D barcode.
  • The image watermark decoding module and the image fingerprint module are activated based on certain combinations of outputs from the classifier(s) (e.g., when none, or all, of the three types of classified images is identified).
  • FIG. 8 shows a corresponding flow chart.
  • The audio classification problem is often termed content-based classification/retrieval, or audio segmentation. There are two basic issues in this work: feature selection and classifier selection.
  • "Audio understanding can be based on features in three layers: low-level acoustic characteristics, intermediate-level audio signatures associated with different sounding objects, and high level semantic models of audio in different scene classes"; and "classification based on these low-level features alone may not be accurate, but the error can be addressed in a higher layer by examining the structure underlying a sequence of continuous audio clips."
  • Lu in [9] used low-level audio features, including 8th-order MFCCs and several other perceptual features, as well as a kernel SVM (support vector machine) as the classifier, in a cascaded scheme.
  • The work in [10] also included perceptual features and used different classifiers in a cascaded classification scheme, including k-NN, LSP VQ and rule-based methods (for smoothing). That paper used dynamic feature sets (i.e., different features) for classifying different classes.
  • Arrangements particularly focused on speech/music discrimination include [19] and [20].
  • Exemplary classifiers also include those detailed in patent publications 20020080286 (British Telecomm), 20020080286 (NEC), 20020080286 (Philips), 20030009325 (DeutscheInstitut), 20040210436 (Microsoft), 20100257129 and 20120109643 (Google), and U.S. Pat. No. 5,712,953 (Hewlett-Packard).
  • It is expected that head-worn devices (e.g., Google Glass goggles) and other unobtrusive sensor platforms will eventually replace today's smartphones.
  • The present technology can be used with such other forms of devices; the term "smartphone" should be construed to encompass all such devices, even those that are not, strictly speaking, cellular, nor telephones.
  • Each of these devices includes one or more processors, one or more memories (e.g., RAM), storage (e.g., a disk or flash memory), a user interface (which may include, e.g., a keypad, a TFT LCD or OLED display screen, touch or other gesture sensors, a camera or other optical sensor, a compass sensor, a 3D magnetometer, a 3-axis accelerometer, a microphone, etc., together with software instructions for providing a graphical user interface), interconnections between these elements (e.g., buses), and an interface for communicating with other devices (which may be wireless, such as GSM, CDMA, W-CDMA, CDMA2000, TDMA, EV-DO, HSDPA, WiFi, WiMax, or Bluetooth, and/or wired, such as through an Ethernet local area network, a T-1 internet connection, etc.).
  • The processes and system components detailed in this specification may be implemented as instructions for computing devices, including general purpose processor instructions for a variety of programmable processors, including microprocessors, graphics processing units (GPUs, such as the nVidia Tegra APX 2600), digital signal processors (e.g., the Texas Instruments TMS320 series devices), etc. These instructions may be implemented as software, firmware, etc. These instructions can also be implemented in various forms of processor circuitry, including programmable logic devices, FPGAs (e.g., Xilinx Virtex series devices), FPOAs (e.g., PicoChip devices), and application specific circuits—including digital, analog and mixed analog/digital circuitry.
  • Execution of the instructions can be distributed among processors and/or made parallel across processors within a device or across a network of devices. Transformation of content signal data may also be distributed among different processor and memory devices. References to “processors” or “modules” should be understood to refer to functionality, rather than requiring a particular form of hardware and/or software implementation.
  • Smartphones and other devices can include software modules for performing the different functions and acts.
  • Known artificial intelligence systems and techniques can be employed to make the inferences, conclusions, and other determinations noted above.
  • each device includes operating system software that provides interfaces to hardware resources and general purpose functions, and also includes application software which can be selectively invoked to perform particular tasks desired by a user.
  • Known browser software, communications software, and media processing software can be adapted for many of the uses detailed herein.
  • Software and hardware configuration data/instructions are commonly stored as instructions in one or more data structures conveyed by tangible media, such as magnetic or optical discs, memory cards, ROM, etc., which may be accessed across a network.
  • Some embodiments may be implemented as embedded systems—a special purpose computer system in which the operating system software and the application software are indistinguishable to the user (e.g., as is commonly the case in basic cell phones).
  • the functionality detailed in this specification can be implemented in operating system software, application software and/or as embedded system software.
  • The use of Bluetooth technology to indicate proximity to, and identity of, nearby persons is illustrative only. Many alternative technologies are known to perform one or both of these functions, and can be readily substituted.

Abstract

Methods employ sensors in portable devices (e.g., smartphones) both to sense content information (e.g., audio and imagery) and context information. Device processing is desirably dependent on both. For example, some embodiments activate certain processor intensive operations (e.g., content recognition) based on classification of sensed content and context. The context can control the location where information produced from such operations is stored, or control an alert signal indicating, e.g., that sensed speech is being transcribed. Some arrangements post sensor data collected by one device to a cloud repository, for access and processing by other devices. Multiple devices can collaborate in collecting and processing data, to exploit advantages each may have (e.g., in location, processing ability, social network resources, etc.). A great many other features and arrangements are also detailed.

Description

RELATED APPLICATION DATA
This application is a divisional of application Ser. No. 13/607,095, filed Sep. 7, 2012 (now U.S. Pat. No. 9,196,028), which claims priority to provisional applications 61/538,578, filed Sep. 23, 2011, and 61/542,737, filed Oct. 3, 2011. This application is also a continuation-in-part of application Ser. No. 14/157,108, filed Jan. 16, 2014, which is a division of application Ser. No. 13/299,140, filed Nov. 17, 2011 (now U.S. Pat. No. 8,819,172), which is a continuation-in-part of international application PCT/US11/59412, filed Nov. 4, 2011 (published as WO2012061760), which claims priority to the following provisional applications: 61/471,651, filed Apr. 4, 2011, 61/479,323, filed Apr. 26, 2011, 61/483,555, filed May 6, 2011, 61/485,888, filed May 13, 2011, and 61/501,602, filed Jun. 27, 2011. International application PCT/US11/59412 is also a continuation-in-part of each of the following applications: Ser. No. 13/174,258, filed Jun. 30, 2011 (now U.S. Pat. No. 8,831,279), Ser. No. 13/207,841, filed Aug. 11, 2011 (now U.S. Pat. No. 9,218,530), and Ser. No. 13/278,949, filed Oct. 21, 2011 (now U.S. Pat. No. 9,183,580).
BACKGROUND AND SUMMARY
In published applications 20110212717, 20110161076 and 20120208592, the present assignee detailed a variety of smartphone arrangements that respond in accordance with context. The present specification expands these teachings in certain respects.
In accordance with one aspect, systems and methods according to the present technology use a smartphone to sense audio and/or visual information, and provide same to a first classifier module. The first classifier module characterizes the input audio-visual stimuli by type (e.g., music, speech, silence, video imagery, natural scene, face, etc.). A second classifier module processes other context information (which may include the output from the first classifier module), such as time of day, day of week, location, calendar data, clock alarm status, motion sensors, Facebook status, etc., and outputs data characterizing a device state type, or scenario. A control rule module then issues control signals to one or more content recognition modules in accordance with the outputs from the two classifier modules.
The control signals can simply enable or disable the different recognition modules. Additionally, if a recognition module is enabled, the control signals can establish the frequency, or schedule, or other parameter(s), by which the module performs its recognition functions.
Such arrangements conserve battery power, by not attempting operations that are unneeded or inappropriate to the context. Moreover, they aid other smartphone operations, since processing resources are not diverted to the idle recognition operations.
The foregoing and other features and advantages of the present technology will be more readily apparent from the following detailed description, which proceeds with reference to the accompanying drawings.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows an illustrative embodiment that incorporates certain aspects of the present technology.
FIG. 2 shows a few of the content recognition modules that may be used in the FIG. 1 embodiment.
FIG. 3 is a block diagram of a process employing aspects of the present technology.
FIG. 4 is a block diagram of an apparatus employing aspects of the present technology.
FIG. 5 is an event controller table illustrating, for one embodiment, how different audio recognition agents are activated based on audio classification data.
FIG. 6 is a flow chart illustrating, for one embodiment, how different audio recognition agents are activated based on audio classification data.
FIG. 7 is an event controller table illustrating, for one embodiment, how different image recognition agents are activated based on outputs from light and motion sensors, and image classification data.
FIG. 8 is a flow chart illustrating, for one embodiment, how different image recognition agents are activated based on outputs from light and motion sensors, and image classification data.
DETAILED DESCRIPTION
Referring to FIG. 1, an illustrative embodiment 10 that incorporates certain aspects of the present technology includes one or more microphones 12, cameras 14, audio-visual classifier modules 16, second classifier modules 18, control rule modules 20, and content recognition modules 22. These components may all be included in a smartphone. Alternatively, they may be distributed between different locations and/or different devices (including the cloud).
One suitable smartphone is the Apple iPhone 4 device, which includes two cameras (one front-facing and one rear-facing), and two microphones. Another is the HTC EVO 3D, which includes stereo cameras (both rear-facing).
The audio-visual classifier module(s) 16 processes the data captured by the microphone(s) and/or camera(s), and classifies such audio-visual content by type.
As is familiar to artisans (and as explained in the Wikipedia article “Statistical classification”), classification is the problem of identifying to which of a set of categories (sub-populations) a new observation belongs. The individual observations may be analyzed into a set of quantifiable properties, known variously as variables, features, etc. These properties may be categorical (e.g. “A”, “B”, “AB” or “O”, for blood type), ordinal (e.g. “large”, “medium” or “small”), etc. A familiar (although sometimes difficult) classification problem is identifying email as spam or not-spam. An algorithm or procedure that implements classification is known as a classifier.
Classically, classification is performed based on a training set of data comprising observations (or instances) whose category membership is known. Classification, in this sense, is regarded as an instance of supervised machine learning, i.e., learning where a training set of correctly-identified observations is available. The corresponding unsupervised procedure is known as clustering (or cluster analysis), and involves grouping data into categories based on some measure of inherent similarity (e.g. the distance between instances, considered as vectors in a multi-dimensional vector space). For purposes of the present application, classification is regarded as including clustering.
One illustrative classifier module 16 is an audio classifier, which categorizes input stimulus as speech, music, background/indeterminate, or silence. For the first three, the module also classifies the volume of the audio, as loud, mid-level, or quiet.
Illustrative audio classification technologies are detailed in a later section.
A simple embodiment activates different content recognition modules, in accordance with the output of the audio classifier, as follows:
TABLE I
                          Nielsen Audio         Gracenote             Nuance Speech
Audio Classification      Watermark Detector    Fingerprint Engine    Recognition Engine
Silent                    -                     -                     -
Quiet background          -                     -                     -
Mid-level background      X                     X                     X
Loud background           X                     X                     X
Music                     X                     X                     -
Speech                    X                     -                     X
That is, if the audio classifier module classifies the sensed audio as “silent” or “quiet background,” all three of the detailed content recognition modules are controlled to “off.” If the sensed audio is classified as music, the system activates the Nielsen audio watermark detector, and the Gracenote fingerprint engine, but leaves the Nuance speech recognition engine off.
If the sensed audio is classified as speech, the audio watermark detector is activated, as is the speech recognition engine, but no fingerprint calculations are performed.
If the audio classifier identifies a loud or mid-level acoustic background, but is unable to further classify its type, the audio watermark detector, the fingerprint engine, and the speech recognition engine are all activated.
It will thus be recognized that different combinations of recognition technologies are applied to the input content, based on the type of content indicated by the content classifier.
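By way of illustration, a control-rule table like Table I might be represented in software as in the following Python sketch. This is a minimal sketch only: the mapping itself mirrors Table I, while the module objects and their enable/disable methods are hypothetical and not drawn from any particular recognition SDK.

    # Illustrative rule table corresponding to Table I. Keys are audio
    # classifications; values name the recognition modules to enable
    # (all modules not listed are disabled).
    CONTROL_RULES = {
        "silent":               set(),
        "quiet background":     set(),
        "mid-level background": {"watermark", "fingerprint", "speech"},
        "loud background":      {"watermark", "fingerprint", "speech"},
        "music":                {"watermark", "fingerprint"},
        "speech":               {"watermark", "speech"},
    }

    def apply_rules(audio_classification, modules):
        """Enable or disable each recognition module per the rule table.

        'modules' is assumed to be a dict mapping names ('watermark',
        'fingerprint', 'speech') to objects exposing enable()/disable().
        """
        enabled = CONTROL_RULES.get(audio_classification, set())
        for name, module in modules.items():
            if name in enabled:
                module.enable()
            else:
                module.disable()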
(The detailed recognition modules are all familiar to artisans. A brief review follows: Nielsen encodes nearly all of the television broadcasts in the United States with an audio watermark that encodes broadcast source and time data, to assist Nielsen in identifying programs for rating surveys, etc. Nielsen maintains a database that correlates source/time data decoded from broadcasts, with program names and other identifiers. Such watermark technology is detailed, e.g., in U.S. Pat. Nos. 6,968,564 and 7,006,555. Gracenote uses audio fingerprinting technology to enable music recognition. Characteristic feature data is derived from audio by a fingerprint engine, and used to query a database containing reference fingerprint data. If a match is found, associated song identification data is returned from the database. Gracenote uses fingerprint technology originally developed by Philips, detailed, e.g., in patent documents 20060075237 and 20060041753. Nuance offers popular speech recognition technology. Its SpeechMagic SDK and/or NaturallySpeaking SDK can be incorporated into embodiments of the present technology to provide speech recognition capability.)
Second classifier module 18 outputs device state type data, or scenario identification data, in accordance with context information. This context information may include the classification of the audio and/or visual environment (i.e., by the audio-visual classifier module(s) 16, as shown by the dashed line in FIG. 1), and typically includes other information.
This other context information can include, but is not limited to, time of day, day of week, location, calendar data, clock alarm status, motion and orientation sensor data, social network information (e.g., from Facebook), etc.
Consider Table II, which expands the Table I information to include certain device state types as determined by the second classifier module (i.e., “away from office after work hours” and “at office during work hours”):
TABLE II
                                                           Nielsen Audio         Gracenote             Nuance Speech
Device State Type                                          Watermark Detector    Fingerprint Engine    Recognition Engine
Silent, away from office after work hours                  -                     -                     -
Quiet background, away from office after work hours        -                     -                     -
Mid-level background, away from office after work hours    X                     X                     X
Loud background, away from office after work hours         X                     X                     X
Loud or quiet music, away from office after work hours     X                     X                     -
Loud or quiet speech, away from office after work hours    X                     -                     X
Silent, at office during work hours                        -                     -                     -
Quiet background, at office during work hours              -                     -                     -
Mid-level background, at office during work hours          -                     -                     -
Loud background, at office during work hours               -                     -                     X
Loud or quiet music, at office during work hours           -                     -                     -
Loud or quiet speech, at office during work hours          -                     -                     X
It will be recognized that the first six rows of Table II are identical to Table I. They detail how the different modules are controlled when the user is away from the office, after work hours, given the noted audio environments.
The final set of rows is different. These correspond to the device state type of being at the user's office, during work hours. As can be seen, only the speech recognition engine is ever activated in this context (i.e., when speech or loud background audio is sensed); the other modules are left idle regardless of the audio environment.
To determine whether the user is “at office during work hours,” or “away from office after work hours,” the second classifier module 18 uses inputs such as time of day data and GPS data, in connection with reference data. This reference data establishes—for the particular smartphone user—the times of day that should be classified as work hours (e.g., 8 am-5 pm, Monday-Friday), and the location that should be classified as the office location (e.g., 45.4518° lat, −122.7932° long, +/−0.0012 degrees).
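One way the second classifier module might make this determination is sketched below in Python; the coordinates, tolerance and work-hour bounds are the illustrative values just noted, and the function name and returned labels are hypothetical. A fuller implementation would, of course, draw the reference data from per-user settings rather than constants.

    from datetime import datetime

    OFFICE_LAT, OFFICE_LON = 45.4518, -122.7932   # reference office location
    TOLERANCE_DEG = 0.0012                        # +/- tolerance, in degrees

    def classify_device_state(now: datetime, lat: float, lon: float) -> str:
        """Return a simplified device state type from clock and GPS data."""
        work_hours = now.weekday() < 5 and 8 <= now.hour < 17   # Mon-Fri, 8 am-5 pm
        at_office = (abs(lat - OFFICE_LAT) <= TOLERANCE_DEG and
                     abs(lon - OFFICE_LON) <= TOLERANCE_DEG)
        if work_hours and at_office:
            return "at office during work hours"
        return "away from office after work hours"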
It will be recognized that this arrangement conserves battery power, by not attempting to recognize songs or television programs while the user is at work. It also aids other tasks the smartphone may be instructed to perform at work, since processing resources are not diverted to the idle recognition operations.
More typically, the smartphone considers other factors beyond those of this simple example. Table IIIA shows a more detailed scenario, as classified by the second classifier module 18:
TABLE IIIA
SENSOR SCENARIO 1
Clock before 6:30 am (M-F)
GPS home
Microphone quiet background
Ambient light sensor dark
Camera (front) dark
Camera (back) dark
Accelerometer zero movement
Alarm set at 6:30 am
Calendar earliest meeting is at 10:00 am
Facebook nothing special
Close (proximity) spouse
This confluence of circumstances is classed by the second classifier module as “Scenario 1.” It corresponds to the scenario in which the user is probably asleep (it is before 6:30 am on a weekday, and the alarm is set for 6:30; the smartphone is stationary in a quiet, dark environment). The control rules 20 associated with Scenario 1 cause all of the content recognition modules to be inactive.
The following tables show other scenarios, as classified by the second classifier module:
TABLE IIIB
SENSOR SCENARIO 2
Clock 6:30 am-7:30 am (M-F)
GPS home
Microphone mid-level background
Ambient light sensor bright
Camera (front) dark/bright
Camera (back) bright/dark
Accelerometer some movement
Alarm dismissed
Calendar earliest meeting is at 10:00 am
Facebook nothing special
Close spouse
TABLE IIIC
SENSOR SCENARIO 3
Clock 7:30 am-8:00 am (M-F)
GPS commute
Microphone loud background
Ambient light sensor dark/bright
Camera (front) dark/bright
Camera (back) dark/bright
Accelerometer some movement
Alarm dismissed
Calendar earliest meeting is at 10:00 am
Facebook nothing special
Close none
TABLE IIID
SENSOR SCENARIO 4
Clock 8:00 am-10:00 am (M-F)
GPS office
Microphone quiet background
Ambient light sensor bright
Camera (front) bright
Camera (back) dark
Accelerometer zero movement
Alarm dismissed
Calendar earliest meeting is at 10:00 am
Facebook nothing special
Close TomS (workmate); SteveH (workmate);
Unknown1; Unknown2
TABLE IIIE
SENSOR SCENARIO 5
Clock 10:00 am-10:30 am (M-F)
GPS meeting room 1
Microphone mid-level background
Ambient light sensor bright
Camera (front) bright
Camera (back) dark
Accelerometer zero movement
Alarm dismissed
Calendar meeting during 10:00 am-10:30 am
Facebook nothing special
Close RyanC (workmate); LisaM (workmate);
MeganP (workmate)
TABLE IIIF
SENSOR SCENARIO 6
Clock 12:00 noon-1:00 pm (M-F)
GPS Subway
Microphone loud background
Ambient light sensor dark
Camera (front) dark
Camera (back) dark
Accelerometer some movement
Alarm dismissed
Calendar nothing special
Facebook nothing special
Close Unknown1; Unknown2; Unknown3;
Unknown4
TABLE IIIG
SENSOR SCENARIO 7
Clock 8:00 pm-11:00 pm (F)
GPS 22323 Colchester Rd, Beaverton
Microphone loud background
Ambient light sensor bright
Camera (front) bright
Camera (back) dark
Accelerometer zero movement
Alarm nothing special
Calendar friend's B-day party 8:00 pm
Facebook friend's B-day today
Close PeterM (buddy); CarrieB (buddy); SheriS
(buddy); Unknown1 . . . Unknown 7
TABLE IIIH
SENSOR SCENARIO 8
Clock 10:00 pm-4:00 am
GPS road
Microphone any
Ambient light sensor dark
Camera (front) dark
Camera (back) dark
Accelerometer/motion >30 miles per hour
Alarm nothing special
Calendar nothing special
Facebook nothing special
Close
Scenario 2 (Table IIIB) corresponds to the user after waking up and before leaving home. The rules include instructions appropriate to this interval—while the user may be watching a morning news program on television, listening to the radio, or talking with a spouse. In particular, the Nielsen watermark detector is active, to allow the user to link to additional web content about something discussed on the television. The fingerprint engine is also active, so that the user might identify an appealing song that airs on the radio. Speech recognition may also be enabled, so that the spouse's verbal instructions to pick up ketchup, grapes, tin foil and postage stamps on the way home are transcribed for later reference.
The user's smartphone also includes various visual content recognition capabilities, including facial recognition. The control rules specify that, in Scenario 2, facial recognition is disabled—as the user isn't expected to need prompting to recall faces of anyone encountered at home this early.
Scenario 3 (Table IIIC) corresponds to the user's drive to work. No television audio is expected in this environment, so the Nielsen watermark detector is disabled. Song recognition and transcription of news from talk radio, however, may be helpful, so the fingerprint and speech recognition engines are enabled. Again, facial recognition is disabled.
A different user may take the bus to work, instead of driving a car. The control rules for this commute scenario, for this user, may be different. Without a car radio, song recognition is unneeded, so the fingerprint engine is disabled. However, the user sometimes overhears amusing conversations on the bus, so speech recognition is enabled so that any humorous dialog may be shared with work-mates. Occasionally, the user sees someone she should recognize on the bus—a parent of a child's soccer teammate, for example—but is unable to recall the name in this different environment. To prepare for this eventuality, the smartphone's facial recognition capability is loaded into memory and ready for operation, but does not process a frame of camera imagery until signaled by the user. (The signal may comprise the user holding the phone in a predetermined pose and saying the word "who.")
The confluence of sensor information detailed in Table IIID—which the second classifier module identifies as Scenario 4—corresponds to the circumstance of the user's morning work at her desk. The smartphone is apparently lying face-up on a surface in the quiet work environment. The corresponding control rules specify that all recognition modules are disabled. However, if the audio classifier indicates a change in audio environment (to mid-level or loud background sound, or to speech), the rules cause the phone to enable the speech recognition module. This provides the user with a transcribed record of any request or information she is given, or any instruction she issues, so that it may be referred to later.
Speech recognition can raise privacy concerns in some situations, including a work setting. Accordingly, the control rules cause the speech recognition module to issue an audible “beep” every thirty seconds when activated at work, to alert others that a recording is being made. In contrast, no “beep” alert is issued in the previously-discussed scenarios, because no recording of private third party speech is normally expected at home or in the car, and there is likely no expectation of privacy for speech overheard in a bus.
Another datum of context that is processed by the illustrated second classifier module 18 is the number and identity of people nearby. “Nearby” may be within the range of a Bluetooth signal issued by a person's cell phone—typically 30 feet or less. Relative distance within this range can be assessed by strength of the Bluetooth signal, with a strong signal indicating, e.g., location within ten feet or less (i.e., “close”). Identity can be discerned—at least for familiar people—by reference to Bluetooth IDs for their known devices. Bluetooth IDs for devices owned by the user, family members, workmates, and other acquaintances, may be stored with the control rules to help discriminate well known persons from others.
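The following sketch illustrates one way such Bluetooth-based screening might be implemented; the RSSI threshold, the form of the known-device table, and the identifiers shown are assumptions for illustration only, not values from any particular Bluetooth stack.

    # Hypothetical table of known Bluetooth identifiers and relationships.
    KNOWN_DEVICES = {
        "AA:BB:CC:DD:EE:01": ("spouse", "family"),
        "AA:BB:CC:DD:EE:02": ("TomS", "workmate"),
    }

    CLOSE_RSSI_DBM = -60   # assumed signal strength for "close" (roughly ten feet or less)

    def close_by_people(scan_results):
        """scan_results: iterable of (bluetooth_id, rssi_dbm) tuples from a scan."""
        close = []
        for device_id, rssi in scan_results:
            if rssi >= CLOSE_RSSI_DBM:
                name, relation = KNOWN_DEVICES.get(device_id, ("Unknown", "stranger"))
                close.append((name, relation))
        return close

    def alert_beeps_required(close):
        """Beep only if an unfamiliar device is sensed close by."""
        return any(relation == "stranger" for _, relation in close)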
Returning briefly to the previous scenarios, the rules can provide that speech recognition—if enabled—is performed without alert beeps, if the user is apparently solitary (i.e., no strong Bluetooth signals sensed, or only transitory signals—such as from strangers in nearby vehicles), or if the user is in the presence only of family members. However, if an unfamiliar strong Bluetooth signal is sensed when speech recognition is enabled, the system can instruct issuance of periodic alert beeps.
(If the user's phone issues speech recognition alert beeps at home, because the user's child has a new device with an unrecognized Bluetooth identifier, the user's phone can present a user interface screen allowing the user to store this previously-unrecognized Bluetooth identifier. This UI allows the user to specify the identifier as corresponding to a family member, or to associate more particular identification information with the identifier (e.g., name and/or relationship). By such arrangement, beeping when not warranted is readily curtailed, and avoided when such circumstance recurs in the future.)
Scenario 5—a work meeting—is contextually similar to Scenario 4, except the audio classifier reports mid-level background audio, and the phone's location is in a meeting room. The speech recognition module is enabled, but corporate data retention policies require that transcripts of meetings be maintained only on corporate servers, so that they can be deleted after a retention period (e.g., 12 months) has elapsed. The control rules module 20 complies with this corporate policy, and immediately transmits the transcribed speech data to a corporate transcriptions database for storage—keeping no copy. Alert beeps are issued as a courtesy reminder of the recording. However, since all the close-by persons are recognized to be “friends” (i.e., their Bluetooth identifiers correspond to known workmates), the rules cause the phone to beep only once every five minutes, instead of once every 30 seconds, to reduce the beeps' intrusiveness. (Additionally or alternatively, the volume of the beeps can be reduced, based on a degree of social relationship between the user and the other sensed individual(s)—so that the beeps are louder when recording someone who is only distantly or not at all socially related to the user.)
Rules for face recognition in scenario 5 can vary depending on whether people sensed to be close-by are recognized by the user's phone. If all are recognized, the facial recognition module is not activated. However, if one or more close-by persons are not in the user's just-noted "friends" list (or within some more distant degree of relationship in a social network), then facial recognition is enabled as before—in an on-demand (rather than free-running) mode. (Alternatively, a different arrangement can be employed, e.g., with facial recognition activated if one or more persons who have a certain type of social network linkage to the user, or who lack such linkage, are sensed to be present.)
Scenario 6 finds the user in a subway, during the noon hour. The rules may be like those noted above for the bus commute. However, radio reception underground is poor. Accordingly, any facial recognition operation consults only facial eigenface reference data stored on the phone—rather than consulting the user's larger collection of Facebook or Picasa facial data, which are stored on cloud servers.
Scenario 7 corresponds to a Friday evening birthday party. Lots of unfamiliar people are present, so the rules launch the facial recognition module in free-running mode—providing the user with the names of every recognized face within the field-of-view of any non-dark camera. This module relies on the user's Facebook and Picasa facial reference data stored in the cloud, as well as such data maintained in Facebook accounts of the user's Facebook friends. Speech recognition is disabled. Audio fingerprinting is enabled and—due to the party context—the phone has downloaded reference fingerprints for all the songs on Billboard's primary song lists (Hot 100, Billboard 200, and Hot 100 Airplay). Having this reference data cached on the phone allows much quicker operation of the song recognition application—at least for these 200+ songs.
Additional Information
Fingerprint computation, watermark detection, and speech/facial recognition are computationally relatively expensive (“computationally heavy”). So are many classification tasks (e.g., speech/music classification). It is desirable to prevent such processes from running at a 100% duty cycle.
One approach is to let the user decide when to run one or more heavy modules—with help from the output of one or more computationally light detectors. Another approach is to add steps that assess the signal quality before running one or more heavy detectors.
Reducing the duty cycle of a heavy module implies the possibility of missed detection, so the user should have some control over how much compromise she/he wants.
Consider a simple classifier (e.g., a quietness classifier), which simply checks the ambient audio energy within a one-second long audio frame, and compares this value to a pre-defined threshold. Such module may indicate that there is a sudden change in the environment from a quiet state. Rules may call for activation of one or more heavy classifiers to determine whether the new audio environment is music or speech. In this case, the system may present a display screen with a “Confirm to Proceed” button that the user taps to undertake the classification. (There can also be an “Ignore” button. The system can have a default behavior, e.g., “Ignore,” if the user makes no selection within a pre-defined interval, such as ten seconds.)
The user response to such prompts can be logged, and associated with different context information (including the sensitivity of the quietness classifier). Over time, this stored history data can be used to predict the circumstances in which the user instructs the heavy classifier to proceed. Action can then be taken based on such historical precedent, rather than always resorting to a user tap.
That is, the system can be self-learning, based on user interaction. For example, when a quietness classifier detects a change in loudness of amount “A,” it asks for the user's permission to enable a heavier classifier (for example, music versus speech classifier) or detector (e.g., watermark detector). If the user agrees, then this “A” level of loudness change is evidently at least sometimes of interest to the user. However, if over time, it becomes evident that the user uniformly refuses to activate the heavy classifier when the loudness changes by amount “A,” then the classifier can reset its threshold accordingly, and not ask the user for permission to activate the heavy module unless the loudness increases by “B” (where B>A). The quietness classifier thus learns to be less sensitive.
Conversely, if the user manually launches the heavy module when the quietness classifier has sensed a change in loudness too small to trigger a UI prompt to the user, this indicates that the threshold used by the quietness classifier is too high, and should be changed to a lower level. The quietness classifier thus learns to be more sensitive.
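A minimal sketch of such a self-adjusting quietness classifier is given below, assuming frame-energy thresholding in dB and a fixed adjustment step; the particular threshold and step values are illustrative only.

    import numpy as np

    class QuietnessClassifier:
        """Flags loudness increases and adapts its threshold to user behavior."""

        def __init__(self, change_threshold_db=6.0, step_db=2.0):
            self.change_threshold_db = change_threshold_db
            self.step_db = step_db
            self.prev_level_db = None

        def frame_level_db(self, samples):
            rms = np.sqrt(np.mean(np.square(samples))) + 1e-12
            return 20 * np.log10(rms)

        def change_detected(self, samples):
            """Call once per one-second audio frame."""
            level = self.frame_level_db(samples)
            changed = (self.prev_level_db is not None and
                       level - self.prev_level_db >= self.change_threshold_db)
            self.prev_level_db = level
            return changed

        def record_user_response(self, user_ran_heavy_module):
            # User launched (or approved) the heavy module: become more sensitive.
            # User declined: require a larger loudness change next time.
            if user_ran_heavy_module:
                self.change_threshold_db = max(1.0, self.change_threshold_db - self.step_db)
            else:
                self.change_threshold_db += self.step_db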
FIG. 3 shows an arrangement using the foregoing principles. A microphone provides an ambient audio signal to a simple classifier, which produces an output based on a gross classification (e.g., silence or sound), based on a threshold audio level. If the classifier module switches from “silence” to “sound,” it causes the smartphone to present a user interface (UI) asking the user whether the system should invoke complex processing (e.g., speech recognition, speech/music classification, or other operation indicated by applicable rules). The system then acts in accordance with the user's instruction.
Shown in dashed lines are additional aspects of the method that may be included. For example, the user's response—entered through the UI—is logged and added to a user history, to guide future automated responses by the system. The current context is also stored in such history, as provided by a context classifier. In some cases, the user history, alone, may provide instructions as to how to respond in a given situation—without the need to ask the user.
(It will be recognized that instead of asking the user whether to invoke the complex processing module when the context changes, the system can instead ask whether the complex processing module should not be invoked. In this case the user's inaction results in the processing module being invoked.)
Another approach is to employ an additional classifier, to decide whether the current audio samples have a quality that merits further classification (i.e., with a heavy classifier). If the quality is judged to be insufficient, the heavy classifier is not activated (or is de-activated).
Information-bearing signals—such as speech and music—are commonly characterized by temporal variation, at least in spectral frequency content, and also generally in amplitude, when analyzed over brief time windows (e.g., 0.5 to 3 seconds). An additional classifier can listen for audio signals that are relatively uniform in spectral frequency content, and/or that are relatively uniform in average amplitude, over such a window interval. If such a classifier detects such a signal, and the amplitude of such signal is stronger by a threshold amount (e.g., 3 dB) than a long-term average amplitude of the sensed audio environment (e.g., over a previous interval of 3-30 seconds), the signal may be regarded as interfering noise that unacceptably impairs the signal-to-noise ratio of the desired audio. In response to such a determination, the system discontinues heavy module processing until the interfering signal ceases.
To cite an extreme case, consider a user riding a bus that passes a construction site where a loud jack-hammer is being used. The just-discussed classifier detects the interval during which the jack-hammer is operated, and interrupts heavy audio processing during such period.
Such classifier may similarly trigger when a loud train passes, or an air compressor operates, or even when a telephone rings—causing the system to change from its normal operation in these circumstances.
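The interference screen described above might be implemented as follows; the 3 dB margin and the window lengths follow the example values given, while the uniformity measure (the spread of short-frame levels across the window) is an assumption made for this sketch.

    import numpy as np

    def level_db(x):
        return 20 * np.log10(np.sqrt(np.mean(np.square(x))) + 1e-12)

    def is_interfering_noise(window, history, sample_rate=16000,
                             uniformity_db=1.0, margin_db=3.0):
        """window: the most recent 0.5-3 s of samples; history: the prior 3-30 s."""
        frame = int(0.1 * sample_rate)
        levels = [level_db(window[i:i + frame])
                  for i in range(0, len(window) - frame, frame)]
        uniform = len(levels) > 1 and np.std(levels) < uniformity_db
        louder = level_db(window) - level_db(history) > margin_db
        return uniform and louder   # e.g., a jack-hammer or a passing train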
Another simple classifier relies on principles noted in Lu et al, SpeakerSense: Energy Efficient Unobtrusive Speaker Identification on Mobile Phones, Pervasive Computing Conf., 2011. Lu et al use a combination of signal energy (RMS) and zero crossing rate (ZCR) to distinguish human speech from other audio. While Lu et al use these parameters to identify speech, they can also be used to identify information-bearing signals more generally. (Or, stated otherwise, to flag audio passages that are likely lacking information, so that heavy processing modules can be disabled.)
As a further alternative, since the additional classifier works after the detection of “sound change,” audio samples prior to the “sound change” can be used as an approximation of the background noise, and the audio sample after the “sound change” can be used as the background noise plus the useful signal. This gives a crude signal-to-noise ratio. The additional classifier can keep heavy modules in an idle state until this ratio exceeds a threshold value (e.g., 10 dB).
Still another additional classifier—to indicate the likely absence of an information-bearing signal—simply looks at a ratio of frequency components. Generally, the presence of high frequency signal components above a threshold amplitude is an indication of audio information. A ratio of energy in high frequency components (e.g., above 2 KHz) compared to energy in low frequency components (e.g., below 500 Hz) can serve as another simple signal-to-noise ratio. If the classifier finds that such ratio is below 3 or 10 dB, it can suspend operation of heavy modules.
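A sketch of such a frequency-ratio screen follows; the 2 kHz and 500 Hz bands and the 3-10 dB threshold are taken from the discussion above, while the FFT framing is an implementation assumption.

    import numpy as np

    def spectral_ratio_db(samples, sample_rate=16000):
        """Ratio (in dB) of energy above 2 kHz to energy below 500 Hz."""
        spectrum = np.abs(np.fft.rfft(samples)) ** 2
        freqs = np.fft.rfftfreq(len(samples), d=1.0 / sample_rate)
        high = spectrum[freqs > 2000].sum()
        low = spectrum[freqs < 500].sum() + 1e-12
        return 10 * np.log10(high / low + 1e-12)

    def heavy_modules_allowed(samples, sample_rate=16000, threshold_db=10.0):
        """Suspend heavy modules when high-frequency content is weak."""
        return spectral_ratio_db(samples, sample_rate) >= threshold_db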
Such an arrangement is shown in FIG. 4. One or more microphones provide a sensed audio signal to an audio screening classifier 30 (i.e., the "additional" classifier of the foregoing discussion). The microphone audio is optionally provided to a speech/music audio classifier 16 (as in FIG. 1) and to several heavy audio detector modules (e.g., watermark detector, speech recognition, etc.). The output of the audio screening classifier provides enable/disable control signals to the different heavy detectors. (For simplicity of illustration, the audio screening classifier 30 provides the same control signal to all the heavy detectors, but in practical implementation, different control signals may be generated for different detectors.) The control signals from the audio screening classifier serve to disable the heavy detector(s), based on the audio sensed by the microphone.
Also shown in FIG. 4 is a context classifier 18, which operates like the second classifier module of FIG. 1. It outputs signals indicating different context scenarios. These output data are provided to a control rules module 20, which controls the mode of operation of the different heavy detectors based on the identified scenario.
(While the FIG. 4 arrangement shows control of heavy detector modules, heavy classifier modules can be controlled by the same type of arrangement.)
The above-discussed principles are likewise applicable to sensing visual information. Visual image classifiers (e.g., facial recognition systems) generally work on imagery having significant spatial variation in luminance (contrast/intensity) and/or hue (color/chrominance). If frames of imagery appear that are lacking in such variations, any heavy image processing module that would otherwise be operating should suspend its operation.
Accordingly, a classifier can look for a series of image frames characterized by luminance or hue variation below a threshold, and interrupt heavy visual processing when such scene is detected. Thus, for example, heavy visual processes are suspended when the user points a camera to a blank wall, or to the floor. (Such action may also be taken based on smartphone orientation, e.g., with facial recognition only operative when the smartphone is oriented with its camera axis within 20 degrees of horizontal. Other threshold values can, of course, be used.)
Similarly, facial recognition analysis is likely wasted effort if the frame is out of focus. Accordingly, a simple classifier can examine frame focus (e.g., by known metrics, such as high frequency content and contrast measures, or by a camera shake metric—provided by the phone's motion sensors), and disable facial recognition if the frame is likely blurred.
Facial recognition can also be disabled if the subject is too distant to likely allow a correct identification. Thus, for example, if the phone's autofocus system indicates a focal distance of ten meters or more, facial recognition needn't be engaged.
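The several image-screening checks just described (low scene variation, camera orientation, blur, and subject distance) might be combined as in the following sketch; the 20-degree and ten-meter figures follow the text, while the remaining numeric thresholds and the gradient-based sharpness measure are assumptions.

    import numpy as np

    def should_run_face_recognition(frame_gray, tilt_from_horizontal_deg,
                                    focal_distance_m,
                                    min_luma_std=10.0, min_sharpness=50.0,
                                    max_tilt_deg=20.0, max_distance_m=10.0):
        """frame_gray: 2-D array of luminance values (e.g., 0-255)."""
        frame = frame_gray.astype(float)
        # Blank wall or floor: too little luminance variation.
        if np.std(frame) < min_luma_std:
            return False
        # Camera axis too far from horizontal.
        if abs(tilt_from_horizontal_deg) > max_tilt_deg:
            return False
        # Blur check: mean gradient energy (a simple high-frequency measure).
        gy, gx = np.gradient(frame)
        if np.mean(gx ** 2 + gy ** 2) < min_sharpness:
            return False
        # Subject too distant, per the autofocus system.
        if focal_distance_m >= max_distance_m:
            return False
        return True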
While Bluetooth is one way to sense other individuals nearby, there are others.
One technique relies on the smartphone's calendar app. When the user's calendar, and phone clock, indicate the user is at a meeting, other participants in the user's proximity can be identified from the meeting participant data in the calendar app.
Another approach relies on location data, which is broadcast from the phone over a short range (or published from the phone to a common site), and used to indicate co-location with other phones. The location data can be derived from known techniques, including GPS, WiFi node identification, etc.
A related approach relies on acoustic emitters that introduce subtle or inaudible background audio signals into an environment, which can be indicative of location. Software in a microphone-equipped device (e.g., a smartphone app) can listen for such a signal (e.g., above or below the range of human hearing, such as above 15-20 kHz), and broadcast or publish—to a public site—information about the sensed signal. The published information can include information conveyed by the sensed signal (e.g., identifying the emitting device or its owner, the device location and/or other context, etc.). The published information can also include information associated with the receiving device (e.g., identifying the device or its owner, the device location and/or other context, etc.). This allows a group of phones near each emitter to be identified. (Related technology is employed by the Shopkick service, and is detailed in patent publication US20110029370.)
Bluetooth is presently preferred because, in addition to identifying nearby people, it also provides a communication channel with nearby phones. This enables the phones to collaborate in various tasks, including speech recognition, music fingerprinting, facial recognition, etc. For example, plural phones can exchange information about their respective battery states and/or other on-going processing tasks. An algorithm is then employed to select one phone to perform a particular task (e.g., the one with the most battery life remaining is selected to perform watermark decoding or facial recognition). This phone then transmits the results of its task—or related information based thereon—to the other phones (by Bluetooth or otherwise).
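One simple election rule of this kind is sketched below; the structure of the state reports exchanged over Bluetooth is an assumption for illustration.

    def elect_worker(phone_states):
        """phone_states: list of dicts like {'id': ..., 'battery_pct': ..., 'busy': ...}.

        Returns the id of the phone best placed to run the heavy task
        (here, simply the non-busy phone with the most battery remaining).
        """
        candidates = [p for p in phone_states if not p.get("busy", False)]
        if not candidates:
            candidates = phone_states
        return max(candidates, key=lambda p: p["battery_pct"])["id"]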
Another form of collaboration is 3D image modeling, based on camera data from two or more different phones, each with a different view of a subject. A particular application is facial recognition, where two or more different views of a person allow a 3D facial model to be generated. Facial recognition can then be based on the 3D model information—yielding a more certain identification than 2D facial recognition affords.
Yet another form of collaboration is for multiple smartphones to undertake the same task, and then share results. Different phone processes may yield results with different confidence measures, in which case the result with the highest confidence measure can be used by all phones. (Such processing can be done by processes in the cloud, instead of using the phones' own processors.)
In some applications, a phone processes ambient audio/visual stimulus in connection with phone-specific information, allowing different phones to provide different results. For example, the face of an unknown person may be identified in a Facebook account accessible to one phone, but not to others. Thus, one phone may be able to complete a task that others cannot. (Other phone-specific information includes history, contacts, computing context, user context, physical context, etc. See, e.g., published application 20110161076 and copending application Ser. No. 13/174,258, filed Jun. 30, 2011 (now U.S. Pat. No. 8,831,279). For image processing, different phones may have better or worse views of the subject.)
Relatedly, collaborating phones can send the audio/imagery they captured to one or more other phones for processing. For example, a phone having Facebook access to useful facial recognition data may not be the phone with the best view of a person to be identified. If plural phones each capture data, and share such data (or information based thereon, e.g., eigenface data) with the other phones, results may be achieved that are better than any phone—by itself—could manage.
Of course, devices may communicate other than by Bluetooth. NFC and WiFi are two such alternatives.
Bluetooth was also noted as a technique for determining that a user is in a vehicle. Again, other arrangements can be employed.
One is GPS. Even a sporadically-executing GPS module (e.g., once every minute) can collect enough trajectory information to determine whether a user is moving in a manner consistent with vehicular travel. For example, GPS can establish that the user is following established roads, and is moving at speeds above that associated with walking or biking. (When disambiguating biking from motorized vehicle travel, terrain elevation can be considered. If the terrain is generally flat, or if the traveler is going uphill, a sustained speed of more than 20 mph may distinguish motorized transport from bicycling. However, if the user is following a road down a steep downhill incline, then a sustained speed of more than 35 mph may be used to establish motorized travel with certainty.)
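A minimal sketch of such a trajectory test follows, using the 20 mph and 35 mph figures noted above; the haversine distance computation, the use of the minimum interval speed as the "sustained" speed, and the once-per-minute sampling cadence are implementation assumptions.

    from math import radians, sin, cos, asin, sqrt

    def haversine_miles(lat1, lon1, lat2, lon2):
        dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
        a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
        return 2 * 3958.8 * asin(sqrt(a))   # Earth radius in miles

    def likely_motorized(fixes, steep_downhill=False):
        """fixes: list of (unix_time_s, lat, lon) GPS samples, e.g., one per minute."""
        speeds_mph = []
        for (t0, la0, lo0), (t1, la1, lo1) in zip(fixes, fixes[1:]):
            hours = (t1 - t0) / 3600.0
            if hours > 0:
                speeds_mph.append(haversine_miles(la0, lo0, la1, lo1) / hours)
        if not speeds_mph:
            return False
        sustained_mph = min(speeds_mph)      # speed sustained across every interval
        threshold = 35.0 if steep_downhill else 20.0
        return sustained_mph > threshold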
If two or more phones report, e.g., by a shared short-range context broadcast, that they are each following the same geo-location-track, at the same speed, then the users of the two phones can conclude that they are traveling on the same conveyance—whether car, bus, bike, etc.
Such a conclusion can similarly be made without GPS, e.g., if two or more phones report similar data from their 3D accelerometers, gyroscopes, and/or magnetometers. Still further, co-conveyance of multiple users can likewise be established if two or more phones capture the same audio (e.g., as indicated by a correlation metric exceeding a threshold value, such as 0.9), and share this information with other nearby devices.
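The audio-correlation test might be implemented as sketched below; the 0.9 threshold follows the text, while the normalization and lag-search details are assumptions.

    import numpy as np

    def max_normalized_correlation(a, b):
        """Peak normalized cross-correlation between two audio captures."""
        a = (a - np.mean(a)) / (np.std(a) + 1e-12)
        b = (b - np.mean(b)) / (np.std(b) + 1e-12)
        n = min(len(a), len(b))
        corr = np.correlate(a[:n], b[:n], mode="full") / n
        return float(np.max(corr))

    def same_conveyance(audio1, audio2, threshold=0.9):
        """Co-conveyance is inferred when the two captures correlate strongly."""
        return max_normalized_correlation(audio1, audio2) >= threshold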
Again, the cloud can serve as a recipient for such information reported by the smartphones, and can make determinations, e.g., about correlations between devices.
Reference was made to a short-range context broadcast. This can be effected by phones broadcasting their sensed context information (which may include captured audio) by Bluetooth to nearby devices. The information shared may be of such a character (e.g., acceleration, captured audio) that privacy concerns do not arise—given the short range of transmission involved.
While this specification focuses on audio applications, and also considers facial recognition, there are unlimited classes that might be recognized and acted-on. A few other visual classes include optical character recognition (OCR) and barcode decoding.
The presence of multiple cameras on a smartphone enables other arrangements. For example, as noted in application Ser. No. 13/212,119, filed Aug. 17, 2011 (now U.S. Pat. No. 8,564,684), a user-facing camera can be used to assess emotions of a user (or user response to information presented on the smartphone screen), and tailor operation of the phone—including use of the other camera—accordingly.
A user-facing camera can also detect the user's eye position. Operation of the phone can thereby be controlled. For example, instead of switching between "portrait" and "landscape" display modes based on the phone's position sensors, this screen display mode can be controlled based on the orientation of the user's eyes. Thus, if the user is lying in bed on her side (i.e., with a line between the pupils extending vertically), and the phone is spatially oriented in a landscape direction (with its long axis extending horizontally, parallel to the axis of the user's body), the phone can operate its display in the "portrait" mode. If the user turns the phone ninety degrees (i.e., so that its long axis is parallel to the axis between the user's eyes), the phone switches its display mode to "landscape."
Similarly, if the user is lying on her back, and holding the phone overhead, the screen mode switches to follow the relative orientation of the axis between the user's eyes, relative to the screen axis. (That is, if the long axis of the phone is parallel with the axis between the user's eyes, landscape mode is used; and vice versa.)
If the phone is equipped with stereo cameras (i.e., two cameras with fields of view that overlap), the two views can be used for distance determination to any point in the frame (i.e., range finding). For certain visual detection tasks (e.g., watermark and barcode decoding), the distance information can be used by the phone processor to guide the user to move the phone closer to, or further from, the intended subject in order to achieve best results.
A phone may seek to identify an audio scene by reference to sensed audio. For example, a meeting room scene may be acoustically characterized by a quiet background, with distinguishable human speech, and with occasional sound source transitions (different people speaking alternately). A home scene with the user and her husband may be acoustically characterized by mid-level background audio (perhaps music or television), and by two different voices speaking alternately. A crowded convention center may be characterized by high-level background sound, with many indistinguishable human voices, and occasionally the user's voice or that of another nearby person.
Once an audio scene has been identified, two or more smartphones can act and cooperate in different ways. For example, if the scene has been identified as a meeting, the user's phone can automatically check-in for the room, indicating that the meeting room is occupied. (Calendar programs are often used for this, but impromptu meetings may occupy rooms without advance scheduling. The smartphone can enter the meeting on the calendar—booking the room against competing reservations—after the meeting has started.)
The phone may communicate with a laptop or other device controlling a Powerpoint slide presentation, to learn the number of slides in the deck being reviewed, and the slide currently being displayed. The laptop or the phone can compute how quickly the slides are being advanced, and extrapolate when the meeting will conclude. (E.g., if a deck has 30 slides, and it has taken 20 minutes to get through 15 slides, a processor can figure it will take a further 20 minutes to get through the final 15 slides. Adding 10 minutes at the end for a wrap-up discussion, it can figure that the meeting will conclude in 30 minutes.) This information can be shared with participants, or posted to the calendar app to indicate when the room might become available.
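The extrapolation in this example amounts to simple arithmetic, e.g., as in the following sketch (the function name and ten-minute wrap-up default are illustrative):

    def predict_minutes_remaining(minutes_elapsed, slides_shown, total_slides,
                                  wrapup_minutes=10):
        """Extrapolate remaining meeting time from the pace of slide advancement."""
        remaining_slides = total_slides - slides_shown
        pace_minutes = remaining_slides * minutes_elapsed / max(slides_shown, 1)
        return pace_minutes + wrapup_minutes

    # The example above: 15 of 30 slides shown in 20 minutes, plus a 10-minute wrap-up.
    print(predict_minutes_remaining(20, 15, 30))   # 30.0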
If the auditory scene indicates a home setting in the presence of a spouse, the two phones can exchange domestic information (e.g., shopping list information, social calendar data, bills to be paid soon, etc.).
In a crowded convention center scene, a phone can initiate automatic electronic business card exchange (e.g., v-card), if it senses the user having a chat with another person, and the phone does not already have contact information for the other person (e.g., as identified by Bluetooth-indicated cell phone number, or otherwise).
In the convention scene, the user's phone might also check the public calendars of people with whom the user talks, to identify those with similar transportation requirements (e.g., a person whose flight departs from the same airport as the user's departing flight, with a flight time within 30 minutes of the user's flight). Such information can then be brought to the attention of the user, e.g., with an audible or tactile alert.
Reference was made to performing certain operations in the cloud. Tasks can be referred to the cloud based on various factors. An example is to use cloud processing for "easy to transmit" data (i.e., small size) and "hard to calculate" tasks (i.e., computationally complex). Cloud processing is often best suited for tasks that don't require extensive local knowledge (e.g., device history and other information stored on the device).
Consider a traveler flying to San Francisco for a conference, who needs to commute to a conference center hotel downtown. On landing at the airport, the user's phone sends the address of the downtown hotel/conference center to a cloud server. The cloud server has knowledge of real-time traffic information, construction delays, etc. The server calculates the optimal route under various constraints, e.g., shortest-time route, shortest-distance route, most cost-effective route, etc. If the user arrives at the airport only 20 minutes before the conference begins, the phone suggests taking a taxi (perhaps suggesting sharing a taxi with others that it senses have the same destination—perhaps others who also have a third-party trustworthiness score exceeding “good”). In contrast, if the user arrives a day ahead of the conference, the phone suggests taking BART—provided the user traveled with one piece of checked baggage or less (determined by reference to airline check-in data stored on the smartphone). Such route selection task is an example of “little data, big computation.”
In addition to audio and imagery from its own sensors, the smartphone can rely on audio and imagery collected by public sensors, such as surveillance cameras in a parking garage, mall, convention center, or a home security system. This information can be part of the “big computation” provided by cloud processing. Or the data can be processed exclusively by the smartphone, such as helping the user find where she parked her yellow Nissan Leaf automobile in a crowded parking lot.
While the specification has focused on analysis of audio and image data, the same principles can be applied to other data types. One is haptic data. Another is gas and chemical analysis. Related is olfactory information. (Smell sensors can be used by smartphones as a diagnostic aid in medicine, e.g., detecting biomarkers that correlate to lung cancer in a user's breath.)
Naturally, information from the user's social networking accounts (Facebook, Twitter, Foursquare, Shopkick, LinkedIn, etc.) can be used as input to the arrangements detailed herein (e.g., as context information). Likewise with public information from the accounts of people that the user encounters, e.g., at work, home, conferences, etc. Moreover, information output from the detailed arrangements can be posted automatically to the user's social networking account(s).
It will be recognized that facial recognition has a number of uses. One, noted above, is as a memory aid—prompting the user with a name of an acquaintance. Another is for user identification and/or authorization. For example, the user's smartphone may broadcast certain private information only if it recognizes a nearby person as a friend (e.g., by reference to the user's list of Friends on Facebook). Facial recognition can also be used to tag images of a person with the person's name and other information.
In some embodiments a user's smartphone broadcasts one or more high quality facial portraits of the user, or associated eigenface data. Another smartphone user can snap a poor picture of the user. That smartphone compares the snapped image with high quality image data (or eigenface data) received over Bluetooth from the user, and can confirm that the poor picture and the received image data correspond to the same individual. The other smartphone then uses the received image data in lieu of the poor picture, e.g., for facial recognition, or to illustrate a Contacts list, or for any other purpose where the user's photo might be employed.
FIG. 5 shows an event controller table for another audio embodiment, indicating how two digital watermark decoders (one tailored for watermarks commonly found in music, another tailored for watermarks commonly found in broadcast speech) are controlled, based on classifier data categorizing input audio as likely silence, speech, and/or music. FIG. 6 shows a corresponding flow chart.
FIG. 7 shows an event controller table for another embodiment—this one involving imagery. This arrangement shows how different recognition modules (1D barcode, 2D barcode, image watermark, image fingerprint, and OCR) are controlled in accordance with different sensor information. (Sensors can encompass logical sensors, such as classifiers.) In the illustrated arrangements, the system includes a light sensor, and a motion sensor. Additionally, one or more image classifiers outputs information identifying the imagery as likely depicting text, a 1D barcode, or a 2D barcode.
Note that there is no classifier output for "image"; every frame is a candidate. Thus, the image watermark decoding module and the image fingerprint module are activated based on certain combinations of outputs from the classifier(s) (e.g., when none, or all, of the three classified image types is identified).
Note, too, how no image recognition processing is undertaken when the system detects a dark scene, or the system detects that the imagery was captured under conditions of motion (“jerk”) that make image quality dubious.
FIG. 8 shows a corresponding flow chart.
Published application 20120208592 further details technology useful with the arrangements of FIGS. 5-8.
More on Audio Classification
The audio classification problem is often termed content-based classification/retrieval, or audio segmentation. There are two basic issues in this work: feature selection and classifier selection.
One of the early works in this field was published in 1996 by Wold et al. [5], who used various perceptual features (loudness, pitch, brightness, bandwidth and harmonicity) and a nearest neighbor classifier. In [6], Foote used the 13 Mel-Frequency Cepstral Coefficients (MFCCs) as audio features, and a vector quantization method for classification. In [7], Zhang and Kuo used hidden Markov models to characterize audio segments, and a hierarchical classifier for two-step classification. Scheirer, in [12], evaluated the properties of 13 features for classifying speech and music, achieving around 95% accuracy (but only for music/speech classification), especially when integrating long segments of sound (2.4 seconds). Liu et al. [8] argued that "audio understanding can be based on features in three layers: low-level acoustic characteristics, intermediate-level audio signatures associated with different sounding objects, and high level semantic models of audio in different scene classes"; and "classification based on these low-level features alone may not be accurate, but the error can be addressed in a higher layer by examining the structure underlying a sequence of continuous audio clips."
Meanwhile, in terms of calculating low-level features, [6,8] explicitly describe first dividing the audio samples into 1-second long clips and then further dividing each clip into 40 non-overlapping 25-millisecond long sub-clips. The low-level features are calculated on each 25-millisecond long sub-clip and then merged across the 40 sub-clips to represent the 1-second long clip. The classification is based on the 1-second long clips. (In a 25-millisecond period, the sound signal shows a stationary property, whereas over a 1-second period, the sound signal exhibits characteristics corresponding to the categories to be distinguished. In these early references, and also in later years, these categories include silence, music, speech, environment sound, speech with environment sound, etc.)
In the 2000s, Microsoft Research Asia worked actively on audio classification, as shown in [9,10]. Lu, in [9], used low-level audio features, including 8th-order MFCCs and several other perceptual features, together with a kernel SVM (support vector machine) classifier in a cascaded scheme. The work in [10] also used perceptual features and employed different classifiers in a cascaded classification scheme, including k-NN, LSP VQ and rule-based methods (for smoothing). In that paper, dynamic feature sets (i.e., different features for different classes) were used.
More recently, work on audio classification has increased. Some researchers work on exploiting new audio features [2,3,4,17] or new classifiers [13]. Others work on high-level classification frameworks beyond the low-level features [1,18]. Still others work on applications based on audio classification, for example, determining the emotional content of video clips [16].
Other researchers compare existing feature extraction methods, classifiers, and parameter selection schemes, making audio classification practical to implement; a prototype has even been implemented on a Nokia cellphone [14,15].
Arrangements particularly focused on speech/music discrimination include [19] and [20].
REFERENCES
  • 1. Rui Cai, Lie Lu, Alan Hanjalic, Hong-Jiang Zhang, and Lian-Hong Cai, “A flexible framework for key audio effects detection and auditory context inference,” IEEE Transactions on audio, speech, and language processing, vol. 14, no. 13, May 2006. (MSRA group)
  • 2. Jalil Shirazi, and Shahrokh Ghaemmaghami, “Improvement to speech-music discrimination using sinusoidal model based features,” Multimed Tools Appl, vol. 50, pp. 415-435, 2010. (Islamic Azad University, Iran; and Sharif University of Technology, Iran)
  • 3. Zhong-Hua Fu, Jhing-Fa Wang and Lei Xie, “Noise robust features for speech-music discrimination in real-time telecommunication,” Multimedia and Expo, 2009 IEEE International Conference on (ICME 2009), pp. 574-577, 2009. (Northwestern Polytech Univ., China; and National Cheng Kung University, Taiwan)
  • 4. Ebru Dogan, et al., “Content-based classification and segmentation of mixed-type audio by using MPEG-7 features,” 2009 First International Conference on Advances in Multimedia, 2009. (ASELSAN Electronics Industries Inc.; and Baskent Univ.; and Middle East Technical Univ., Turkey)
  • 5. Erling Wold, Thom Blum, Douglas Keislar, and James Wheaton, “Content-based classification, search and retrieval of audio,” IEEE Multimedia Magazine, vol. 3, no. 3, pp. 27-36, 1996. (Muscle Fish)
  • 6. Jonathan Foote, "Content-based retrieval of music and audio," Multimedia Storage and Archiving Systems II, Proc. of SPIE, vol. 3229, pp. 138-147, 1997. (National University of Singapore)
  • 7. Tong Zhang and C.-C. J. Kuo, "Audio-guided audiovisual data segmentation, indexing, and retrieval," in Proc. of SPIE Storage and Retrieval for Image and Video Databases VII, 1999. (Integrated Media System Center, USC)
  • 8. Zhu Liu, Yao Wang and Tsuhan Chen, “Audio feature extraction and analysis for scene segmentation and classification,” Journal of VLSI Signal Processing Systems, pp. 61-79, 1998. (Polytechnic University)
  • 9. Lie Lu, Stan Z. Li and Hong-Jiang Zhang, “Content-based audio segmentation using support vector machines,” ICME 2001. (MSRA)
  • 10. Lie Lu, Hao Jiang and Hongjiang Zhang, “A robust audio classification and segmentation method,” ACM Multimedia, 2001. (MSRA)
  • 11. Lie Lu and Alan Hanjalic, “Text-like segmentation of general audio for content-based retrieval,” IEEE Transactions on Multimedia, vol. 11, no. 4, June 2009.
  • 12. Eric Scheirer and Malcolm Slaney, “Construction and evaluation of a robust multifeature speech/music discriminator,” ICASSP 1997. (MIT Media Lab)
  • 13. Dong-Chul Park, “Classification of audio signals using Fuzzy c-means with divergence-based kernel,” Pattern Recognition Letters, vol. 30, issue 9, 2009. (Myong Ji University, Republic of Korea)
  • 14. Mikko Perttunen, Max Van Kleek, Ora Lassila, and Jukka Riekki, “Auditory context recognition using SVMs,” The second International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies (Ubicomm 2008), 2008. (University of Oulu, Finland; CSAIL, MIT; Nokia Research Center Cambridge, Mass.)
  • 15. Mikko Perttunen, Max Van Kleek, Ora Lassila, and Jukka Riekki, “An implementation of auditory context recognition for mobile devices,” Tenth International Conference on Mobile Data Management: Systems, Services and Middleware, 2009. (University of Oulu, Finland; CSAIL, MIT; Nokia Research Center Cambridge, Mass.)
  • 16. Rene Teixeira, Toshihiko Yamasaki, and Kiyoharu Aizawa, “Determination of emotional content of video clips by low-level audiovisual features,” Multimedia Tools and Applications, pp. 1-29, January 201. (University of Tokyo)
  • 17. Lei Xie, Zhong-Hua Fu, Wei Feng, and Yong Luo, “Pitch-density-based features and an SVM binary tree approach for multi-class audio classification in broadcast news,” Multimedia Systems, vol. 17, pp. 101-112, 2011. (Northwestern Polytechnic University, China)
  • 18. Lie Lu, and Alan Hanjalic, “Text-like segmentation of general audio for content-based retrieval,” IEEE Transactions on Multimedia, vol. 11, no. 4, pp. 658-699, 2009. (MSRA; Delft University of Technology, Netherlands)
  • 19. Chen et al, Mixed Type Audio Classification with Support Vector Machine, 2006 IEEE Int'l Conf on Multimedia and Expo, pp. 781-784.
  • 20. Harb et al, Robust Speech Music Discrimination Using Spectrum's First Order Statistics and Neural Networks, 7th Intl Symp. on Signal Proc. and its Applications, 2003.
Exemplary classifiers also include those detailed in patent publications 20020080286 (British Telecomm), 20020080286 (NEC), 20020080286 (Philips), 20030009325 (Deutsche Telekom), 20040210436 (Microsoft), 20100257129 and 20120109643 (Google), and U.S. Pat. No. 5,712,953 (Hewlett-Packard).
Other Remarks
Having described and illustrated the principles of our inventive work with reference to illustrative examples, it will be recognized that the technology is not so limited.
For example, while reference has been made to smartphones, it will be recognized that this technology finds utility with all manner of devices—both portable and fixed. PDAs, organizers, portable music players, desktop computers, laptop computers, tablet computers, netbooks, wearable computers, servers, etc., can all make use of the principles detailed herein.
Similarly, it is expected that head-worn devices (e.g., Google Glass goggles), and other unobtrusive sensor platforms will eventually replace today's smartphones. Naturally, the present technology can be used with such other forms of devices.
The term “smartphone” should be construed to encompass all such devices, even those that are not strictly-speaking cellular, nor telephones.
(Details of the iPhone, including its touch interface, are provided in Apple's published patent application 20080174570.)
The design of smartphones and other computers used in embodiments of the present technology is familiar to the artisan. In general terms, each includes one or more processors, one or more memories (e.g. RAM), storage (e.g., a disk or flash memory), a user interface (which may include, e.g., a keypad, a TFT LCD or OLED display screen, touch or other gesture sensors, a camera or other optical sensor, a compass sensor, a 3D magnetometer, a 3-axis accelerometer, a microphone, etc., together with software instructions for providing a graphical user interface), interconnections between these elements (e.g., buses), and an interface for communicating with other devices (which may be wireless, such as GSM, CDMA, W-CDMA, CDMA2000, TDMA, EV-DO, HSDPA, WiFi, WiMax, or Bluetooth, and/or wired, such as through an Ethernet local area network, a T-1 internet connection, etc.).
While this specification earlier noted its relation to the assignee's previous patent filings, it bears repeating. These disclosures should be read in concert and construed as a whole. Applicants intend that features in each be combined with features in the others. That is, it should be understood that the methods, elements and concepts disclosed in the present application may be combined with the methods, elements and concepts detailed in those related applications. While some combinations have been particularly detailed in the present specification, many have not, because the number of permutations and combinations is large. However, implementation of all such combinations is straightforward to the artisan from the provided teachings.
The processes and system components detailed in this specification may be implemented as instructions for computing devices, including general purpose processor instructions for a variety of programmable processors, including microprocessors, graphics processing units (GPUs, such as the nVidia Tegra APX 2600), digital signal processors (e.g., the Texas Instruments TMS320 series devices), etc. These instructions may be implemented as software, firmware, etc. These instructions can also be implemented in various forms of processor circuitry, including programmable logic devices, FPGAs (e.g., the noted Xilinx Virtex series devices), FPOAs (e.g., the noted PicoChip devices), and application specific circuits, including digital, analog and mixed analog/digital circuitry. Execution of the instructions can be distributed among processors and/or made parallel across processors within a device or across a network of devices. Transformation of content signal data may also be distributed among different processor and memory devices. References to “processors” or “modules” should be understood to refer to functionality, rather than requiring a particular form of hardware and/or software implementation.
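By way of illustration, a minimal sketch of such parallelization follows, using the Python standard library and placeholder recognition functions (names hypothetical); distribution across a network of devices would substitute remote procedure calls for the local workers.

```python
# A minimal sketch (hypothetical function names) of running several
# recognition modules in parallel on a multi-core device, using the
# Python standard library's concurrent.futures.
from concurrent.futures import ThreadPoolExecutor

def decode_watermark(frame):      # placeholder recognition module
    return ("watermark", None)

def match_fingerprint(frame):     # placeholder recognition module
    return ("fingerprint", None)

def run_recognizers(frame, recognizers=(decode_watermark, match_fingerprint)):
    """Dispatch each recognizer to its own worker and gather the results."""
    with ThreadPoolExecutor(max_workers=len(recognizers)) as pool:
        futures = [pool.submit(r, frame) for r in recognizers]
        return [f.result() for f in futures]

if __name__ == "__main__":
    print(run_recognizers(frame=b"\x00" * 16))
```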
Software instructions for implementing the detailed functionality can be readily authored by artisans, from the descriptions provided herein, e.g., written in C, C++, Visual Basic, Java, Python, Tcl, Perl, Scheme, Ruby, etc. Smartphones and other devices according to certain implementations of the present technology can include software modules for performing the different functions and acts. Known artificial intelligence systems and techniques can be employed to make the inferences, conclusions, and other determinations noted above.
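By way of example, the following Python sketch (hypothetical function and data names, not code from this specification) illustrates the kind of module organization contemplated: a first classifier labels captured audio/visual content by type, a second classifier determines a contextual scenario from other data (time, location, nearby persons, etc.), and together they select which combination of recognition technologies is applied, consistent with the claims that follow.

```python
# A minimal sketch (hypothetical names) of context-based selection of
# recognition technologies.

def classify_content(av_data):
    """First classification procedure: label captured audio/visual data
    as one of two types (e.g., speech-like vs. music-like)."""
    # Placeholder heuristic; a real system might use an SVM, GMM, or
    # neural network classifier of the kinds cited above.
    return "first_type" if av_data.get("zero_crossing_rate", 0) > 0.1 else "second_type"

def classify_context(context):
    """Second classification procedure: determine a contextual scenario
    from second information (time of day, location, calendar data,
    motion sensor data, nearby persons, etc.)."""
    if context.get("location") == "home" and context.get("hour", 0) < 9:
        return "scenario_A"
    return "scenario_B"

# Different combinations of recognition technologies, keyed by
# (content type, contextual scenario). Entries are illustrative only.
RECOGNITION_COMBINATIONS = {
    ("first_type", "scenario_A"): ["watermark_decoding", "speech_recognition"],
    ("first_type", "scenario_B"): ["watermark_decoding", "music_fingerprinting"],
    ("second_type", None):        ["music_fingerprinting", "speech_recognition"],
}

def select_recognition(av_data, context):
    """Pick the combination of recognition technologies to apply."""
    content_type = classify_content(av_data)
    if content_type == "second_type":
        return RECOGNITION_COMBINATIONS[("second_type", None)]
    scenario = classify_context(context)
    return RECOGNITION_COMBINATIONS[(content_type, scenario)]

if __name__ == "__main__":
    sample = {"zero_crossing_rate": 0.15}
    ctx = {"location": "home", "hour": 7}
    print(select_recognition(sample, ctx))  # -> ['watermark_decoding', 'speech_recognition']
```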
Commonly, each device includes operating system software that provides interfaces to hardware resources and general purpose functions, and also includes application software which can be selectively invoked to perform particular tasks desired by a user. Known browser software, communications software, and media processing software can be adapted for many of the uses detailed herein. Software and hardware configuration data/instructions are commonly stored as instructions in one or more data structures conveyed by tangible media, such as magnetic or optical discs, memory cards, ROM, etc., which may be accessed across a network. Some embodiments may be implemented as embedded systems, i.e., special purpose computer systems in which the operating system software and the application software are indistinguishable to the user (e.g., as is commonly the case in basic cell phones). The functionality detailed in this specification can be implemented in operating system software, application software and/or as embedded system software.
While this disclosure has detailed particular ordering of acts and particular combinations of elements in the illustrative embodiments, it will be recognized that other contemplated methods may re-order acts (possibly omitting some and adding others), and other contemplated combinations may omit some elements, add others, and configure the elements differently, etc.
Although disclosed as a complete system, sub-combinations of the detailed arrangements are also separately contemplated.
While detailed primarily in the context of systems that perform audio capture and processing, corresponding arrangements are equally applicable to systems that capture and process visual stimulus (imagery), or that capture and process both imagery and audio.
Similarly, while certain aspects of the technology have been described by reference to illustrative methods, it will be recognized that apparatus configured to perform the acts of such methods are also contemplated as part of applicant's inventive work. Likewise, other aspects have been described by reference to illustrative apparatus, and the methodology performed by such apparatus is likewise within the scope of the present technology. Still further, tangible computer readable media containing instructions for configuring a processor or other programmable system to perform such methods are also expressly contemplated.
The reference to Bluetooth technology to indicate proximity to, and identity of, nearby persons is illustrative only. Many alternative technologies are known to perform one or both of these functions, and can be readily substituted.
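By way of illustration, a minimal sketch of one Bluetooth-based approach follows, assuming the third-party "bleak" Bluetooth Low Energy library and a hypothetical whitelist mapping device addresses to persons; any comparable radio-based or other proximity technology could be substituted.

```python
# A minimal sketch, assuming the cross-platform "bleak" BLE library and a
# hypothetical mapping of known device addresses to persons' identities.
import asyncio
from bleak import BleakScanner

KNOWN_DEVICES = {               # hypothetical whitelist
    "AA:BB:CC:DD:EE:FF": "Alice",
    "11:22:33:44:55:66": "Bob",
}

async def nearby_persons(scan_seconds=5.0):
    """Scan for advertising Bluetooth devices and report which known
    persons' devices are currently within radio range."""
    devices = await BleakScanner.discover(timeout=scan_seconds)
    return [KNOWN_DEVICES[d.address] for d in devices if d.address in KNOWN_DEVICES]

if __name__ == "__main__":
    print(asyncio.run(nearby_persons()))
```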
The illustrations should be understood as exemplary and not limiting.
It is impossible to expressly catalog the myriad variations and combinations of the technology described herein. Applicants recognize and intend that the concepts of this specification can be combined, substituted and interchanged—both among and between themselves, as well as with those known from the cited prior art. Moreover, it will be recognized that the detailed technology can be included with other technologies—current and upcoming—to advantageous effect.
The reader is presumed to be familiar with the documents (including patent documents) referenced herein. To provide a comprehensive disclosure without unduly lengthening this specification, applicants incorporate-by-reference these documents referenced above. (Such documents are incorporated in their entireties, even if cited above in connection with specific of their teachings.) These references disclose technologies and teachings that can be incorporated into the arrangements detailed herein, and into which the technologies and teachings detailed herein can be incorporated.

Claims (20)

We claim:
1. A method comprising the acts:
applying a first classification procedure to received audio and/or visual information, to identify a type of the received audio and/or visual information from among two possible types: a first type, and a second type;
applying a first combination of plural recognition technologies to the received audio and/or visual information if the received audio and/or visual information is identified as the first type; and
applying a second combination of plural recognition technologies to the received audio and/or visual information if the received audio and/or visual information is identified as the second type;
wherein at least one of the applied combinations of plural recognition technologies includes a watermark- or fingerprint-based recognition technology, and the first and second combinations are different; and
wherein at least one of said acts is performed by hardware configured to perform such act(s);
the method further including:
applying a second classification procedure to second information to determine a contextual scenario type from among plural contextual scenario types, said second information including information different than the received audio and/or visual information;
applying said first combination of plural recognition technologies to the received audio and/or visual information if the received audio and/or visual information is identified as the first type, and the contextual scenario type is determined to be a first contextual scenario type; and
applying a third combination of plural recognition technologies to the received audio and/or visual information if the received audio and/or visual information is identified as the first type, and the contextual scenario is determined to be a second contextual scenario type different than said first contextual scenario type;
wherein the first and third combinations of plural recognition technologies are different.
2. The method of claim 1 that further includes:
applying a fourth combination of plural recognition technologies to the received audio and/or visual information if the received audio and/or visual information is identified as the first type, and the contextual scenario is determined to be a third contextual scenario type different than said first and second contextual scenario types;
wherein the first, third and fourth combinations of plural recognition technologies are different.
3. The method of claim 1 in which the second information includes an item of information selected from the group consisting of: time of day, day, day of week, location, calendar data, clock alarm status, motion sensor data, social network status, number of persons nearby, and identities of persons nearby.
4. The method of claim 3 in which the second information comprises two items of information selected from said group.
5. The method of claim 4 in which:
the received audio and/or visual information comprises audio information;
one recognition technology of said first combination of plural recognition technologies is a watermark decoding technology; and
one recognition technology of said second combination of plural recognition technologies is music recognition.
6. The method of claim 4 in which:
the received audio and/or visual information comprises audio information;
one recognition technology of said first combination of plural recognition technologies, is selected from a group consisting of: a first watermark decoding technology, a second watermark decoding technology, speech recognition, and music recognition; and
one recognition technology of said second combination of plural recognition technologies is also selected from said group.
7. The method of claim 6 in which two recognition technologies of said first combination of plural recognition technologies, are selected from said group, and two recognition technologies of said second combination of plural recognition technologies, are also selected from said group.
8. The method of claim 4 in which:
the received audio and/or visual information comprises audio information;
one recognition technology of said first combination of plural recognition technologies is watermark decoding; and
one recognition technology of said second combination of plural recognition technologies is fingerprint-based image recognition.
9. The method of claim 4 in which:
the received audio and/or visual information comprises audio information;
one recognition technology of said first combination of plural recognition technologies, is selected from a group consisting of: watermark decoding, fingerprint-based image recognition, optical character recognition, facial recognition, and barcode decoding; and
one recognition technology of said second combination of plural recognition technologies, is also selected from said group.
10. The method of claim 9 in which two recognition technologies of said first combination of plural recognition technologies, are selected from said group, and two recognition technologies of said second combination of plural recognition technologies, are also selected from said group.
11. The method of claim 1 that includes:
in one situation, when the received audio and/or visual information is identified as the second type, applying the second combination of plural recognition technologies to the received audio and/or visual information;
in a further situation, when the received audio and/or visual information is identified as the first type, and the contextual scenario type is determined to be the first contextual scenario type, applying said first combination of plural recognition technologies to the received audio and/or visual information; and
in a still further situation, when the received audio and/or visual information is identified as the first type, and the contextual scenario is determined to be the second contextual scenario type different than said first contextual scenario type, applying said third combination of plural recognition technologies to the received audio and/or visual information.
12. The method of claim 11 in which the received audio and/or visual information comprises visual information, and at least one of said applied combinations of plural recognition technologies includes barcode decoding.
13. A mobile system including a microphone, a camera, a memory, a processor, and a touchscreen, the memory including software instructions configuring the system to perform acts including:
applying a first classification procedure to received audio and/or visual information sensed by the microphone and/or camera, to identify a type of the received audio and/or visual information from among two possible types: a first type, and a second type;
applying a second classification procedure to second information to determine a contextual scenario type from among plural contextual scenario types, said second information including information different than the received audio and/or visual information;
applying a first combination of plural recognition technologies to the received audio and/or visual information in a first situation, in which the received audio and/or visual information is identified as the first type, and the contextual scenario type is determined to be a first contextual scenario type;
applying a second combination of plural recognition technologies to the received audio and/or visual information in a second situation, in which the received audio and/or visual information is identified as the second type different than the first type; and
applying a third combination of plural recognition technologies to the received audio and/or visual information in a third situation, in which the received audio and/or visual information is identified as the first type, and the contextual scenario is determined to be a second contextual scenario type different than said first contextual scenario type;
wherein at least one of the recognition technologies, in one of said applied combinations of plural recognition technologies, is a watermark- or fingerprint-based recognition technology; and
the first, second and third combinations of recognition technologies are all different.
14. The system of claim 13 in which the second information includes an item of information selected from the group consisting of: time of day, day, day of week, location, calendar data, clock alarm status, motion sensor data, social network status, number of persons nearby, and identities of persons nearby.
15. The system of claim 13 in which the first and second contextual scenario types are characterized by a confluence of two or more different contextual conditions, at least one of which includes time of day, day, day of week, location, calendar data, clock alarm status, motion sensor data, social network status, number of persons nearby, and identities of persons nearby.
16. The system of claim 13 in which the received audio and/or visual information comprises visual information, and at least one of said applied combinations of plural recognition technologies includes barcode decoding.
17. A non-transitory computer readable medium that contains software instructions for configuring a programmable hardware system to perform acts including:
applying a first classification procedure to received audio and/or visual information, to identify a type of the received audio and/or visual information from among two possible types: a first type, and a second type;
applying a second classification procedure to second information to determine a contextual scenario type from among plural contextual scenario types, said second information including information different than the received audio and/or visual information;
applying a first combination of plural recognition technologies to the received audio and/or visual information in a first situation, in which the received audio and/or visual information is identified as the first type, and the contextual scenario type is determined to be a first contextual scenario type;
applying a second combination of plural recognition technologies to the received audio and/or visual information in a second situation, in which the received audio and/or visual information is identified as the second type different than the first type; and
applying a third combination of plural recognition technologies to the received audio and/or visual information in a third situation, in which the received audio and/or visual information is identified as the first type, and the contextual scenario is determined to be a second contextual scenario type different than said first contextual scenario type;
wherein at least one of the recognition technologies, in one of said applied combinations of plural recognition technologies, is a watermark- or fingerprint-based recognition technology; and
the first, second and third combinations of recognition technologies are all different.
18. The non-transitory computer readable medium of claim 17 in which the second information includes an item of information selected from the group consisting of: time of day, day, day of week, location, calendar data, clock alarm status, motion sensor data, social network status, number of persons nearby, and identities of persons nearby.
19. The non-transitory computer readable medium of claim 17 in which the first and second contextual scenario types are characterized by a confluence of two or more different contextual conditions, at least one of which includes time of day, day, day of week, location, calendar data, clock alarm status, motion sensor data, social network status, number of persons nearby, and identities of persons nearby.
20. The non-transitory computer readable medium of claim 17 in which the received audio and/or visual information comprises visual information, and at least one of said applied combinations of plural recognition technologies includes barcode decoding.
US14/947,008 2011-04-04 2015-11-20 Context-based smartphone sensor logic Active US9595258B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US14/947,008 US9595258B2 (en) 2011-04-04 2015-11-20 Context-based smartphone sensor logic
US15/446,837 US20170243584A1 (en) 2011-09-23 2017-03-01 Context-based smartphone sensor logic
US15/711,357 US10199042B2 (en) 2011-04-04 2017-09-21 Context-based smartphone sensor logic
US16/262,634 US10510349B2 (en) 2011-04-04 2019-01-30 Context-based smartphone sensor logic
US16/709,463 US10930289B2 (en) 2011-04-04 2019-12-10 Context-based smartphone sensor logic

Applications Claiming Priority (15)

Application Number Priority Date Filing Date Title
US201161471651P 2011-04-04 2011-04-04
US201161479323P 2011-04-26 2011-04-26
US201161483555P 2011-05-06 2011-05-06
US201161485888P 2011-05-13 2011-05-13
US201161501602P 2011-06-27 2011-06-27
US13/174,258 US8831279B2 (en) 2011-03-04 2011-06-30 Smartphone-based methods and systems
US13/207,841 US9218530B2 (en) 2010-11-04 2011-08-11 Smartphone-based methods and systems
US201161538578P 2011-09-23 2011-09-23
US201161542737P 2011-10-03 2011-10-03
US13/278,949 US9183580B2 (en) 2010-11-04 2011-10-21 Methods and systems for resource management on portable devices
PCT/US2011/059412 WO2012061760A2 (en) 2010-11-04 2011-11-04 Smartphone-based methods and systems
US13/299,140 US8819172B2 (en) 2010-11-04 2011-11-17 Smartphone-based methods and systems
US13/607,095 US9196028B2 (en) 2011-09-23 2012-09-07 Context-based smartphone sensor logic
US14/157,108 US9330427B2 (en) 2010-11-04 2014-01-16 Smartphone-based methods and systems
US14/947,008 US9595258B2 (en) 2011-04-04 2015-11-20 Context-based smartphone sensor logic

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US13/607,095 Division US9196028B2 (en) 2011-04-04 2012-09-07 Context-based smartphone sensor logic
US15/711,357 Division US10199042B2 (en) 2011-04-04 2017-09-21 Context-based smartphone sensor logic

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US15/446,837 Continuation US20170243584A1 (en) 2011-04-04 2017-03-01 Context-based smartphone sensor logic

Publications (2)

Publication Number Publication Date
US20160232898A1 US20160232898A1 (en) 2016-08-11
US9595258B2 true US9595258B2 (en) 2017-03-14

Family

ID=47914762

Family Applications (6)

Application Number Title Priority Date Filing Date
US13/607,095 Active 2033-09-14 US9196028B2 (en) 2011-04-04 2012-09-07 Context-based smartphone sensor logic
US14/947,008 Active US9595258B2 (en) 2011-04-04 2015-11-20 Context-based smartphone sensor logic
US15/446,837 Abandoned US20170243584A1 (en) 2011-04-04 2017-03-01 Context-based smartphone sensor logic
US15/711,357 Active US10199042B2 (en) 2011-04-04 2017-09-21 Context-based smartphone sensor logic
US16/262,634 Active US10510349B2 (en) 2011-04-04 2019-01-30 Context-based smartphone sensor logic
US16/709,463 Active US10930289B2 (en) 2011-04-04 2019-12-10 Context-based smartphone sensor logic

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US13/607,095 Active 2033-09-14 US9196028B2 (en) 2011-04-04 2012-09-07 Context-based smartphone sensor logic

Family Applications After (4)

Application Number Title Priority Date Filing Date
US15/446,837 Abandoned US20170243584A1 (en) 2011-04-04 2017-03-01 Context-based smartphone sensor logic
US15/711,357 Active US10199042B2 (en) 2011-04-04 2017-09-21 Context-based smartphone sensor logic
US16/262,634 Active US10510349B2 (en) 2011-04-04 2019-01-30 Context-based smartphone sensor logic
US16/709,463 Active US10930289B2 (en) 2011-04-04 2019-12-10 Context-based smartphone sensor logic

Country Status (6)

Country Link
US (6) US9196028B2 (en)
EP (1) EP2758956B1 (en)
JP (1) JP6251906B2 (en)
KR (1) KR20140064969A (en)
CN (1) CN103918247B (en)
WO (1) WO2013043393A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10199042B2 (en) 2011-04-04 2019-02-05 Digimarc Corporation Context-based smartphone sensor logic
US11049094B2 (en) 2014-02-11 2021-06-29 Digimarc Corporation Methods and arrangements for device to device communication

Families Citing this family (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8504062B2 (en) 2010-11-01 2013-08-06 Wavemarket, Inc. System and method for aggregating and associating mobile device location data
US9484046B2 (en) 2010-11-04 2016-11-01 Digimarc Corporation Smartphone-based methods and systems
US10445464B2 (en) 2012-02-17 2019-10-15 Location Labs, Inc. System and method for detecting medical anomalies using a mobile communication device
US9244499B2 (en) * 2012-06-08 2016-01-26 Apple Inc. Multi-stage device orientation detection
US10013670B2 (en) * 2012-06-12 2018-07-03 Microsoft Technology Licensing, Llc Automatic profile selection on mobile devices
US9292864B2 (en) * 2012-06-20 2016-03-22 Intel Corporation Wireless communication device and methods for synched distributed advertisement for device-to-device discovery
US9104467B2 (en) 2012-10-14 2015-08-11 Ari M Frank Utilizing eye tracking to reduce power consumption involved in measuring affective response
US9477993B2 (en) 2012-10-14 2016-10-25 Ari M Frank Training a predictor of emotional response based on explicit voting on content and eye tracking to verify attention
US9305559B2 (en) 2012-10-15 2016-04-05 Digimarc Corporation Audio watermark encoding with reversing polarity and pairwise embedding
US9401153B2 (en) 2012-10-15 2016-07-26 Digimarc Corporation Multi-mode audio recognition and auxiliary data encoding and decoding
US9460204B2 (en) * 2012-10-19 2016-10-04 Sony Corporation Apparatus and method for scene change detection-based trigger for audio fingerprinting analysis
US20140114716A1 (en) * 2012-10-20 2014-04-24 Moritz Tim Flögel Method and system for proximity reminders
US9721010B2 (en) * 2012-12-13 2017-08-01 Microsoft Technology Licensing, Llc Content reaction annotations
US10574744B2 (en) 2013-01-31 2020-02-25 Dell Products L.P. System and method for managing peer-to-peer information exchanges
US10049336B2 (en) 2013-02-14 2018-08-14 Sociometric Solutions, Inc. Social sensing and behavioral analysis system
US9565526B2 (en) * 2013-02-25 2017-02-07 Dell Products L.P. System and method for dynamic geo-fencing
US8990638B1 (en) 2013-03-15 2015-03-24 Digimarc Corporation Self-stabilizing network nodes in mobile discovery system
WO2014188231A1 (en) * 2013-05-22 2014-11-27 Nokia Corporation A shared audio scene apparatus
CA2913538C (en) * 2013-06-07 2022-01-04 Sociometric Solutions, Inc. Social sensing and behavioral analysis system
CN103309618A (en) 2013-07-02 2013-09-18 姜洪明 Mobile operating system
US9894489B2 (en) * 2013-09-30 2018-02-13 William J. Johnson System and method for situational proximity observation alerting privileged recipients
US9466009B2 (en) 2013-12-09 2016-10-11 Nant Holdings Ip. Llc Feature density object classification, systems and methods
US20150162000A1 (en) * 2013-12-10 2015-06-11 Harman International Industries, Incorporated Context aware, proactive digital assistant
US20150278737A1 (en) * 2013-12-30 2015-10-01 Google Inc. Automatic Calendar Event Generation with Structured Data from Free-Form Speech
US9807291B1 (en) 2014-01-29 2017-10-31 Google Inc. Augmented video processing
US9794475B1 (en) 2014-01-29 2017-10-17 Google Inc. Augmented video capture
US9367613B1 (en) 2014-01-30 2016-06-14 Google Inc. Song identification trigger
US9402155B2 (en) * 2014-03-03 2016-07-26 Location Labs, Inc. System and method for indicating a state of a geographic area based on mobile device sensor measurements
US9179184B1 (en) 2014-06-20 2015-11-03 Google Inc. Methods, systems, and media for detecting a presentation of media content on a display device
US10824954B1 (en) * 2014-06-25 2020-11-03 Bosch Sensortec Gmbh Methods and apparatus for learning sensor data patterns of physical-training activities
US9589118B2 (en) * 2014-08-20 2017-03-07 Google Technology Holdings LLC Context-based authentication mode selection
US11580501B2 (en) * 2014-12-09 2023-02-14 Samsung Electronics Co., Ltd. Automatic detection and analytics using sensors
CN105791469B (en) * 2014-12-19 2019-01-18 宏达国际电子股份有限公司 Mobile communications device and its control method
CN104505095A (en) * 2014-12-22 2015-04-08 上海语知义信息技术有限公司 Voice control system and voice control method for alarm clock
KR101634356B1 (en) * 2015-01-26 2016-06-29 에스케이텔레콤 주식회사 Apparatus for analyzing sounds, and control method thereof
US9591447B2 (en) 2015-02-27 2017-03-07 Toyota Motor Engineering & Manufacturing North America, Inc. Systems and methods for providing contextual environmental information
WO2016154598A1 (en) * 2015-03-25 2016-09-29 Carnegie Mellon University System and method for adaptive, rapidly deployable, human-intelligent sensor feeds
EP3275122A4 (en) 2015-03-27 2018-11-21 Intel Corporation Avatar facial expression and/or speech driven animations
US10715468B2 (en) * 2015-03-27 2020-07-14 Intel Corporation Facilitating tracking of targets and generating and communicating of messages at computing devices
JP6448795B2 (en) * 2015-04-21 2019-01-09 華為技術有限公司Huawei Technologies Co.,Ltd. Method, apparatus, and terminal device for setting fingerprint sensor interrupt threshold
US10180339B1 (en) 2015-05-08 2019-01-15 Digimarc Corporation Sensing systems
US20160343237A1 (en) 2015-05-22 2016-11-24 Google Inc. Systems and methods of integrating sensor output of a mobile device with a security system
US10788800B2 (en) 2015-06-05 2020-09-29 Apple Inc. Data-driven context determination
CN104965426A (en) 2015-06-24 2015-10-07 百度在线网络技术(北京)有限公司 Intelligent robot control system, method and device based on artificial intelligence
US11343413B2 (en) * 2015-07-02 2022-05-24 Gopro, Inc. Automatically determining a wet microphone condition in a camera
US9769364B2 (en) * 2015-07-02 2017-09-19 Gopro, Inc. Automatically determining a wet microphone condition in a sports camera
US10701165B2 (en) 2015-09-23 2020-06-30 Sensoriant, Inc. Method and system for using device states and user preferences to create user-friendly environments
US10902043B2 (en) * 2016-01-03 2021-01-26 Gracenote, Inc. Responding to remote media classification queries using classifier models and context parameters
US9859731B2 (en) 2016-01-15 2018-01-02 International Business Machines Corporation Alternate alarm notifications based on battery condition
US9734701B2 (en) 2016-01-20 2017-08-15 International Business Machines Corporation Alternative alarm generator
US9948479B2 (en) 2016-04-05 2018-04-17 Vivint, Inc. Identification graph theory
JP6454748B2 (en) 2016-05-18 2019-01-16 レノボ・シンガポール・プライベート・リミテッド Method for certifying presence / absence of user, method for controlling device, and electronic apparatus
KR101939756B1 (en) 2016-07-05 2019-01-18 현대자동차주식회사 Internet of things system and control method thereof
CN106131341B (en) * 2016-08-22 2018-11-30 维沃移动通信有限公司 A kind of photographic method and mobile terminal
EP3301891B1 (en) * 2016-09-28 2019-08-28 Nxp B.V. Mobile device and method for determining its context
CN106550147A (en) * 2016-10-28 2017-03-29 努比亚技术有限公司 A kind of voice broadcast alarm set and its method
CA3046803C (en) * 2016-12-14 2023-01-10 Novetechnologies, LLC Livestock biosecurity system and method of use
US10488912B1 (en) 2017-01-27 2019-11-26 Digimarc Corporation Method and apparatus for analyzing sensor data
US10614826B2 (en) 2017-05-24 2020-04-07 Modulate, Inc. System and method for voice-to-voice conversion
CN107204194A (en) * 2017-05-27 2017-09-26 冯小平 Determine user's local environment and infer the method and apparatus of user view
US10395650B2 (en) * 2017-06-05 2019-08-27 Google Llc Recorded media hotword trigger suppression
US10733575B2 (en) * 2017-06-06 2020-08-04 Cisco Technology, Inc. Automatic generation of reservations for a meeting-space for disturbing noise creators
US11367346B2 (en) 2017-06-07 2022-06-21 Nexar, Ltd. Digitizing and mapping the public space using collaborative networks of mobile agents and cloud nodes
CN107463940B (en) * 2017-06-29 2020-02-21 清华大学 Vehicle type identification method and device based on mobile phone data
TWI640900B (en) * 2017-12-27 2018-11-11 中華電信股份有限公司 System and method of augmented reality for mobile vehicle
US20190235831A1 (en) * 2018-01-31 2019-08-01 Amazon Technologies, Inc. User input processing restriction in a speech processing system
US10432779B2 (en) 2018-02-23 2019-10-01 Motorola Mobility Llc Communication session modifications based on a proximity context
US10692496B2 (en) 2018-05-22 2020-06-23 Google Llc Hotword suppression
WO2020008862A1 (en) * 2018-07-02 2020-01-09 ソニー株式会社 Information processing apparatus, information processing method, and information processing apparatus-readable recording medium
US11294976B1 (en) * 2018-07-22 2022-04-05 Tuple Software LLC Ad-hoc venue engagement system
CN109448703B (en) * 2018-11-14 2021-05-11 山东师范大学 Audio scene recognition method and system combining deep neural network and topic model
WO2020128897A2 (en) 2018-12-20 2020-06-25 Cochlear Limited Environmental classification controlled output level in bone conduction devices
US11386636B2 (en) 2019-04-04 2022-07-12 Datalogic Usa, Inc. Image preprocessing for optical character recognition
US10924661B2 (en) * 2019-05-02 2021-02-16 International Business Machines Corporation Generating image capture configurations and compositions
KR20200142314A (en) 2019-06-12 2020-12-22 삼성전자주식회사 Electronic device and method of providing notification information thereof
US11948690B2 (en) 2019-07-23 2024-04-02 Samsung Electronics Co., Ltd. Pulmonary function estimation
CN110442241A (en) * 2019-08-09 2019-11-12 Oppo广东移动通信有限公司 Schedule display methods, device, mobile terminal and computer readable storage medium
WO2021030759A1 (en) * 2019-08-14 2021-02-18 Modulate, Inc. Generation and detection of watermark for real-time voice conversion
US11698676B2 (en) 2019-08-23 2023-07-11 Samsung Electronics Co., Ltd. Method and electronic device for eye-tracking
CN110660385A (en) * 2019-09-30 2020-01-07 出门问问信息科技有限公司 Command word detection method and electronic equipment
US11458409B2 (en) * 2020-05-27 2022-10-04 Nvidia Corporation Automatic classification and reporting of inappropriate language in online applications
CN112183401A (en) * 2020-09-30 2021-01-05 敦泰电子(深圳)有限公司 Image acquisition method, chip and image acquisition device
CN112287691B (en) * 2020-11-10 2024-02-13 深圳市天彦通信股份有限公司 Conference recording method and related equipment
US11830489B2 (en) 2021-06-30 2023-11-28 Bank Of America Corporation System and method for speech processing based on response content
US20230136608A1 (en) * 2021-10-28 2023-05-04 Capped Out Media System and methods for advertisement enhancement

Citations (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5799068A (en) 1992-06-29 1998-08-25 Elonex I.P. Holdings Ltd. Smart phone integration with computer systems
JP2000322078A (en) 1999-05-14 2000-11-24 Sumitomo Electric Ind Ltd On-vehicle voice recognition device
US20020055924A1 (en) 2000-01-18 2002-05-09 Richard Liming System and method providing a spatial location context
US20020065063A1 (en) * 1999-05-24 2002-05-30 Christopher R. Uhlik System and method for emergency call channel allocation
US20030062419A1 (en) 2001-07-13 2003-04-03 Welch Allyn Data Collection, Inc. Optical reader having a color imager
US20040138877A1 (en) 2002-12-27 2004-07-15 Kabushiki Kaisha Toshiba Speech input apparatus and method
US20050198095A1 (en) 2003-12-31 2005-09-08 Kavin Du System and method for obtaining information relating to an item of commerce using a portable imaging device
US20060009702A1 (en) 2004-04-30 2006-01-12 Olympus Corporation User support apparatus
US20060031684A1 (en) 2004-08-06 2006-02-09 Sharma Ravi K Fast signal detection and distributed computing in portable computing devices
US20060047704A1 (en) 2004-08-31 2006-03-02 Kumar Chitra Gopalakrishnan Method and system for providing information services relevant to visual imagery
US20060107219A1 (en) 2004-05-26 2006-05-18 Motorola, Inc. Method to enhance user interface and target applications based on context awareness
US7076737B2 (en) 1998-12-18 2006-07-11 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US20070100480A1 (en) 2005-10-28 2007-05-03 Microsoft Corporation Multi-modal device power/mode management
US20070112567A1 (en) 2005-11-07 2007-05-17 Scanscout, Inc. Techiques for model optimization for statistical pattern recognition
US20070192352A1 (en) 2005-12-21 2007-08-16 Levy Kenneth L Content Metadata Directory Services
US20070286463A1 (en) 2006-06-09 2007-12-13 Sony Ericsson Mobile Communications Ab Media identification
JP2008009120A (en) 2006-06-29 2008-01-17 Mitsubishi Electric Corp Remote controller and household electrical appliance
US20080267504A1 (en) 2007-04-24 2008-10-30 Nokia Corporation Method, device and computer program product for integrating code-based and optical character recognition technologies into a mobile visual search
US20080296392A1 (en) 2007-05-31 2008-12-04 Connell Ii Jonathan H Portable device-based shopping checkout
US20090002491A1 (en) 2005-09-16 2009-01-01 Haler Robert D Vehicle-mounted video system with distributed processing
US20090031814A1 (en) 2006-10-23 2009-02-05 Kiyoaki Takiguchi Marker detection apparatus and marker detection method
US7512889B2 (en) 1998-12-18 2009-03-31 Microsoft Corporation Method and system for controlling presentation of information to a user based on the user's condition
US20090164896A1 (en) 2007-12-20 2009-06-25 Karl Ola Thorn System and method for dynamically changing a display
US7565294B2 (en) 1999-05-19 2009-07-21 Digimarc Corporation Methods and systems employing digital content
US20100004926A1 (en) 2008-06-30 2010-01-07 Waves Audio Ltd. Apparatus and method for classification and segmentation of audio content, based on the audio signal
US20100036717A1 (en) 2004-12-29 2010-02-11 Bernard Trest Dynamic Information System
US20100048242A1 (en) 2008-08-19 2010-02-25 Rhoads Geoffrey B Methods and systems for content processing
US20100070284A1 (en) * 2008-03-03 2010-03-18 Lg Electronics Inc. Method and an apparatus for processing a signal
US20100070272A1 (en) 2008-03-04 2010-03-18 Lg Electronics Inc. method and an apparatus for processing a signal
US20100162105A1 (en) 2008-12-19 2010-06-24 Palm, Inc. Access and management of cross-platform calendars
US20100158310A1 (en) 2008-12-23 2010-06-24 Datalogic Scanning, Inc. Method and apparatus for identifying and tallying objects
US7774504B2 (en) 2005-01-19 2010-08-10 Truecontext Corporation Policy-driven mobile forms applications
US20100226526A1 (en) 2008-12-31 2010-09-09 Modro Sierra K Mobile media, devices, and signaling
US20100271365A1 (en) 2009-03-01 2010-10-28 Facecake Marketing Technologies, Inc. Image Transformation Systems and Methods
US20100277611A1 (en) 2009-05-01 2010-11-04 Adam Holt Automatic content tagging, such as tagging digital images via a wireless cellular network using metadata and facial recognition
US20100318470A1 (en) 2009-05-13 2010-12-16 Christoph Meinel Means for Processing Information
US20110043652A1 (en) 2009-03-12 2011-02-24 King Martin T Automatically providing content associated with captured information, such as information captured in real-time
US20110098056A1 (en) * 2009-10-28 2011-04-28 Rhoads Geoffrey B Intuitive computing methods and systems
US20110125735A1 (en) 2009-08-07 2011-05-26 David Petrou Architecture for responding to a visual query
US20110131040A1 (en) 2009-12-01 2011-06-02 Honda Motor Co., Ltd Multi-mode speech recognition
US20110137895A1 (en) 2009-12-03 2011-06-09 David Petrou Hybrid Use of Location Sensor Data and Visual Query to Return Local Listings for Visual Query
US20110150292A1 (en) 2000-11-06 2011-06-23 Boncyk Wayne C Object Information Derived from Object Images
US20110153050A1 (en) 2008-08-26 2011-06-23 Dolby Laboratories Licensing Corporation Robust Media Fingerprints
US20110159921A1 (en) 2009-12-31 2011-06-30 Davis Bruce L Methods and arrangements employing sensor-equipped smart phones
US20110161076A1 (en) 2009-12-31 2011-06-30 Davis Bruce L Intuitive Computing Methods and Systems
US20110212717A1 (en) 2008-08-19 2011-09-01 Rhoads Geoffrey B Methods and Systems for Content Processing
WO2011116309A1 (en) 2010-03-19 2011-09-22 Digimarc Corporation Intuitive computing methods and systems
US20110233278A1 (en) 2004-07-29 2011-09-29 Symbol Technologies, Inc. Point-of-transaction workstation for electro-optically reading one-dimensional and two-dimensional indicia by image capture
US20110244919A1 (en) 2010-03-19 2011-10-06 Aller Joshua V Methods and Systems for Determining Image Processing Operations Relevant to Particular Imagery
US20120013766A1 (en) 2004-11-29 2012-01-19 Rothschild Trust Holdings, Llc Device and method for embedding and retrieving information in digital images
US20120023060A1 (en) 2005-12-29 2012-01-26 Apple Inc. Electronic device with automatic mode switching
US20120034904A1 (en) 2010-08-06 2012-02-09 Google Inc. Automatically Monitoring for Voice Input Based on Context
US20120059780A1 (en) 2009-05-22 2012-03-08 Teknologian Tutkimuskeskus Vtt Context recognition in mobile devices
US20120075168A1 (en) 2010-09-14 2012-03-29 Osterhout Group, Inc. Eyepiece with uniformly illuminated reflective display
US20120078397A1 (en) 2010-04-08 2012-03-29 Qualcomm Incorporated System and method of smart audio logging for mobile devices
US20120143655A1 (en) 2009-06-30 2012-06-07 Kabushiki Kaisha Toshiba Checkout apparatus and working state measurement apparatus
US20120141660A1 (en) 2009-08-14 2012-06-07 Michael Fiedler Secure identification of a product
US20120224743A1 (en) 2011-03-04 2012-09-06 Rodriguez Tony F Smartphone-based methods and systems
US20120284012A1 (en) 2010-11-04 2012-11-08 Rodriguez Tony F Smartphone-Based Methods and Systems
US20130007201A1 (en) 2011-06-29 2013-01-03 Gracenote, Inc. Interactive streaming content apparatus, systems and methods
US20130044233A1 (en) 2011-08-17 2013-02-21 Yang Bai Emotional illumination, and related arrangements
WO2013043393A1 (en) 2011-09-23 2013-03-28 Digimarc Corporation Context-based smartphone sensor logic
US20130090926A1 (en) 2011-09-16 2013-04-11 Qualcomm Incorporated Mobile device context information using speech detection
US20130097630A1 (en) 2011-10-14 2013-04-18 Tony F. Rodriguez Arrangements employing content identification and/or distribution identification data
US20130254422A2 (en) 2010-05-04 2013-09-26 Soundhound, Inc. Systems and Methods for Sound Recognition

Family Cites Families (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6614914B1 (en) 1995-05-08 2003-09-02 Digimarc Corporation Watermark embedder and reader
US6122403A (en) 1995-07-27 2000-09-19 Digimarc Corporation Computer system linked by using information in data objects
US6947571B1 (en) 1999-05-19 2005-09-20 Digimarc Corporation Cell phones with optical capabilities, and related applications
US6182218B1 (en) 1994-12-13 2001-01-30 Mitsubishi Corporation Digital content management system using electronic watermark
US20030133592A1 (en) 1996-05-07 2003-07-17 Rhoads Geoffrey B. Content objects with computer instructions steganographically encoded therein, and associated methods
US6590996B1 (en) 2000-02-14 2003-07-08 Digimarc Corporation Color adaptive watermarking
US5712953A (en) 1995-06-28 1998-01-27 Electronic Data Systems Corporation System and method for classification of audio or audio/video signals based on musical content
US6829368B2 (en) 2000-01-26 2004-12-07 Digimarc Corporation Establishing and interacting with on-line media collections using identifiers in media signals
US7003731B1 (en) 1995-07-27 2006-02-21 Digimare Corporation User control and activation of watermark enabled objects
US7562392B1 (en) 1999-05-19 2009-07-14 Digimarc Corporation Methods of interacting with audio and ambient music
US7930546B2 (en) 1996-05-16 2011-04-19 Digimarc Corporation Methods, systems, and sub-combinations useful in media identification
US6819863B2 (en) 1998-01-13 2004-11-16 Koninklijke Philips Electronics N.V. System and method for locating program boundaries and commercial boundaries using audio categories
ES2247741T3 (en) 1998-01-22 2006-03-01 Deutsche Telekom Ag SIGNAL CONTROLLED SWITCHING METHOD BETWEEN AUDIO CODING SCHEMES.
US7006555B1 (en) 1998-07-16 2006-02-28 Nielsen Media Research, Inc. Spectral audio encoding
US7676372B1 (en) 1999-02-16 2010-03-09 Yugen Kaisha Gm&M Prosthetic hearing device that transforms a detected speech into a speech of a speech form assistive in understanding the semantic meaning in the detected speech
US7185201B2 (en) 1999-05-19 2007-02-27 Digimarc Corporation Content identifiers triggering corresponding responses
US6968564B1 (en) 2000-04-06 2005-11-22 Nielsen Media Research, Inc. Multi-band spectral audio encoding
US6901362B1 (en) 2000-04-19 2005-05-31 Microsoft Corporation Audio segmentation and classification
US8121843B2 (en) 2000-05-02 2012-02-21 Digimarc Corporation Fingerprint methods and systems for media signals
AU2001287574A1 (en) 2000-06-30 2002-01-08 Ingenium Pharmaceuticals Ag Human g protein-coupled receptor igpcr20, and uses thereof
US6990453B2 (en) 2000-07-31 2006-01-24 Landmark Digital Services Llc System and methods for recognizing sound and music signals in high noise and distortion
US7359889B2 (en) 2001-03-02 2008-04-15 Landmark Digital Services Llc Method and apparatus for automatically creating database for use in automated media recognition system
WO2003062960A2 (en) 2002-01-22 2003-07-31 Digimarc Corporation Digital watermarking and fingerprinting including symchronization, layering, version control, and compressed embedding
US20040091111A1 (en) 2002-07-16 2004-05-13 Levy Kenneth L. Digital watermarking and fingerprinting applications
JP2004096520A (en) * 2002-09-02 2004-03-25 Hosiden Corp Sound recognition remote controller
DE60326743D1 (en) 2002-09-30 2009-04-30 Gracenote Inc FINGERPRINT EXTRACTION
KR20050086470A (en) 2002-11-12 2005-08-30 코닌클리케 필립스 일렉트로닉스 엔.브이. Fingerprinting multimedia contents
US7370190B2 (en) 2005-03-03 2008-05-06 Digimarc Corporation Data processing systems and methods with enhanced bios functionality
US7516074B2 (en) 2005-09-01 2009-04-07 Auditude, Inc. Extraction and matching of characteristic fingerprints from audio signals
EP1977370A4 (en) 2006-01-23 2011-02-23 Digimarc Corp Methods, systems, and subcombinations useful with physical articles
EP1895745B1 (en) * 2006-08-31 2015-04-22 Swisscom AG Method and communication system for continuous recording of data from the environment
US8564544B2 (en) 2006-09-06 2013-10-22 Apple Inc. Touch screen device, method, and graphical user interface for customizing display of content category icons
US8565815B2 (en) 2006-11-16 2013-10-22 Digimarc Corporation Methods and systems responsive to features sensed from imagery or other data
US20080243806A1 (en) 2007-03-26 2008-10-02 Roger Dalal Accessing information on portable cellular electronic devices
CN101038177B (en) * 2007-04-02 2012-08-15 深圳市赛格导航科技股份有限公司 GPS navigation apparatus
US7912444B2 (en) 2007-04-23 2011-03-22 Sony Ericsson Mobile Communications Ab Media portion selection system and method
US8417259B2 (en) 2008-03-31 2013-04-09 At&T Mobility Ii Llc Localized detection of mobile devices
US20090259936A1 (en) * 2008-04-10 2009-10-15 Nokia Corporation Methods, Apparatuses and Computer Program Products for Generating A Preview of A Content Item
US8520979B2 (en) * 2008-08-19 2013-08-27 Digimarc Corporation Methods and systems for content processing
US20100205628A1 (en) 2009-02-12 2010-08-12 Davis Bruce L Media processing methods and arrangements
US9788043B2 (en) 2008-11-07 2017-10-10 Digimarc Corporation Content interaction methods and systems employing portable devices
US9117268B2 (en) 2008-12-17 2015-08-25 Digimarc Corporation Out of phase digital watermarking in two chrominance directions
CA2754061A1 (en) 2009-03-03 2010-09-10 Digimarc Corporation Narrowcasting from public displays, and related arrangements
EP2406787B1 (en) 2009-03-11 2014-05-14 Google, Inc. Audio classification for information retrieval using sparse features
US10304069B2 (en) 2009-07-29 2019-05-28 Shopkick, Inc. Method and system for presentment and redemption of personalized discounts
US8768313B2 (en) 2009-08-17 2014-07-01 Digimarc Corporation Methods and systems for image or audio recognition processing
US8819172B2 (en) 2010-11-04 2014-08-26 Digimarc Corporation Smartphone-based methods and systems
US9183580B2 (en) 2010-11-04 2015-11-10 Digimarc Corporation Methods and systems for resource management on portable devices
CN102118886A (en) * 2010-01-04 2011-07-06 中国移动通信集团公司 Recognition method of voice information and equipment
US8401224B2 (en) 2010-05-05 2013-03-19 Digimarc Corporation Hidden image signalling
US20120046071A1 (en) 2010-08-20 2012-02-23 Robert Craig Brandis Smartphone-based user interfaces, such as for browsing print media
US8521541B2 (en) 2010-11-02 2013-08-27 Google Inc. Adaptive audio transcoding
EP2503852A1 (en) 2011-03-22 2012-09-26 Koninklijke Philips Electronics N.V. Light detection system and method
US9904394B2 (en) 2013-03-13 2018-02-27 Immerson Corporation Method and devices for displaying graphical user interfaces based on user contact

Patent Citations (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5799068A (en) 1992-06-29 1998-08-25 Elonex I.P. Holdings Ltd. Smart phone integration with computer systems
US7076737B2 (en) 1998-12-18 2006-07-11 Tangis Corporation Thematic response to a computer user's context, such as by a wearable personal computer
US7512889B2 (en) 1998-12-18 2009-03-31 Microsoft Corporation Method and system for controlling presentation of information to a user based on the user's condition
JP2000322078A (en) 1999-05-14 2000-11-24 Sumitomo Electric Ind Ltd On-vehicle voice recognition device
US7565294B2 (en) 1999-05-19 2009-07-21 Digimarc Corporation Methods and systems employing digital content
US20020065063A1 (en) * 1999-05-24 2002-05-30 Christopher R. Uhlik System and method for emergency call channel allocation
US20020055924A1 (en) 2000-01-18 2002-05-09 Richard Liming System and method providing a spatial location context
US20110150292A1 (en) 2000-11-06 2011-06-23 Boncyk Wayne C Object Information Derived from Object Images
US20030062419A1 (en) 2001-07-13 2003-04-03 Welch Allyn Data Collection, Inc. Optical reader having a color imager
US6722569B2 (en) 2001-07-13 2004-04-20 Welch Allyn Data Collection, Inc. Optical reader having a color imager
US20040138877A1 (en) 2002-12-27 2004-07-15 Kabushiki Kaisha Toshiba Speech input apparatus and method
US20050198095A1 (en) 2003-12-31 2005-09-08 Kavin Du System and method for obtaining information relating to an item of commerce using a portable imaging device
US20060009702A1 (en) 2004-04-30 2006-01-12 Olympus Corporation User support apparatus
US20060107219A1 (en) 2004-05-26 2006-05-18 Motorola, Inc. Method to enhance user interface and target applications based on context awareness
US20110233278A1 (en) 2004-07-29 2011-09-29 Symbol Technologies, Inc. Point-of-transaction workstation for electro-optically reading one-dimensional and two-dimensional indicia by image capture
US20060031684A1 (en) 2004-08-06 2006-02-09 Sharma Ravi K Fast signal detection and distributed computing in portable computing devices
US20060047704A1 (en) 2004-08-31 2006-03-02 Kumar Chitra Gopalakrishnan Method and system for providing information services relevant to visual imagery
US20120013766A1 (en) 2004-11-29 2012-01-19 Rothschild Trust Holdings, Llc Device and method for embedding and retrieving information in digital images
US20100036717A1 (en) 2004-12-29 2010-02-11 Bernard Trest Dynamic Information System
US7774504B2 (en) 2005-01-19 2010-08-10 Truecontext Corporation Policy-driven mobile forms applications
US20090002491A1 (en) 2005-09-16 2009-01-01 Haler Robert D Vehicle-mounted video system with distributed processing
US20070100480A1 (en) 2005-10-28 2007-05-03 Microsoft Corporation Multi-modal device power/mode management
US20070112567A1 (en) 2005-11-07 2007-05-17 Scanscout, Inc. Techiques for model optimization for statistical pattern recognition
US20070192352A1 (en) 2005-12-21 2007-08-16 Levy Kenneth L Content Metadata Directory Services
US20120023060A1 (en) 2005-12-29 2012-01-26 Apple Inc. Electronic device with automatic mode switching
US20070286463A1 (en) 2006-06-09 2007-12-13 Sony Ericsson Mobile Communications Ab Media identification
US20100284617A1 (en) 2006-06-09 2010-11-11 Sony Ericsson Mobile Communications Ab Identification of an object in media and of related media objects
JP2008009120A (en) 2006-06-29 2008-01-17 Mitsubishi Electric Corp Remote controller and household electrical appliance
US20090031814A1 (en) 2006-10-23 2009-02-05 Kiyoaki Takiguchi Marker detection apparatus and marker detection method
US20080267504A1 (en) 2007-04-24 2008-10-30 Nokia Corporation Method, device and computer program product for integrating code-based and optical character recognition technologies into a mobile visual search
US20080296392A1 (en) 2007-05-31 2008-12-04 Connell Ii Jonathan H Portable device-based shopping checkout
US20090164896A1 (en) 2007-12-20 2009-06-25 Karl Ola Thorn System and method for dynamically changing a display
US20100070284A1 (en) * 2008-03-03 2010-03-18 Lg Electronics Inc. Method and an apparatus for processing a signal
US20100070272A1 (en) 2008-03-04 2010-03-18 Lg Electronics Inc. method and an apparatus for processing a signal
US20100004926A1 (en) 2008-06-30 2010-01-07 Waves Audio Ltd. Apparatus and method for classification and segmentation of audio content, based on the audio signal
US20110212717A1 (en) 2008-08-19 2011-09-01 Rhoads Geoffrey B Methods and Systems for Content Processing
US20100048242A1 (en) 2008-08-19 2010-02-25 Rhoads Geoffrey B Methods and systems for content processing
US20110153050A1 (en) 2008-08-26 2011-06-23 Dolby Laboratories Licensing Corporation Robust Media Fingerprints
US20100162105A1 (en) 2008-12-19 2010-06-24 Palm, Inc. Access and management of cross-platform calendars
US20100158310A1 (en) 2008-12-23 2010-06-24 Datalogic Scanning, Inc. Method and apparatus for identifying and tallying objects
US20100226526A1 (en) 2008-12-31 2010-09-09 Modro Sierra K Mobile media, devices, and signaling
US20100271365A1 (en) 2009-03-01 2010-10-28 Facecake Marketing Technologies, Inc. Image Transformation Systems and Methods
US20110043652A1 (en) 2009-03-12 2011-02-24 King Martin T Automatically providing content associated with captured information, such as information captured in real-time
US20100277611A1 (en) 2009-05-01 2010-11-04 Adam Holt Automatic content tagging, such as tagging digital images via a wireless cellular network using metadata and facial recognition
US20100318470A1 (en) 2009-05-13 2010-12-16 Christoph Meinel Means for Processing Information
US20120059780A1 (en) 2009-05-22 2012-03-08 Teknologian Tutkimuskeskus Vtt Context recognition in mobile devices
US20120143655A1 (en) 2009-06-30 2012-06-07 Kabushiki Kaisha Toshiba Checkout apparatus and working state measurement apparatus
US20110125735A1 (en) 2009-08-07 2011-05-26 David Petrou Architecture for responding to a visual query
US20120141660A1 (en) 2009-08-14 2012-06-07 Michael Fiedler Secure identification of a product
US20110098056A1 (en) * 2009-10-28 2011-04-28 Rhoads Geoffrey B Intuitive computing methods and systems
US20110131040A1 (en) 2009-12-01 2011-06-02 Honda Motor Co., Ltd Multi-mode speech recognition
US20110137895A1 (en) 2009-12-03 2011-06-09 David Petrou Hybrid Use of Location Sensor Data and Visual Query to Return Local Listings for Visual Query
US20110161076A1 (en) 2009-12-31 2011-06-30 Davis Bruce L Intuitive Computing Methods and Systems
US20110159921A1 (en) 2009-12-31 2011-06-30 Davis Bruce L Methods and arrangements employing sensor-equipped smart phones
US20110244919A1 (en) 2010-03-19 2011-10-06 Aller Joshua V Methods and Systems for Determining Image Processing Operations Relevant to Particular Imagery
WO2011116309A1 (en) 2010-03-19 2011-09-22 Digimarc Corporation Intuitive computing methods and systems
US20120078397A1 (en) 2010-04-08 2012-03-29 Qualcomm Incorporated System and method of smart audio logging for mobile devices
US20130254422A2 (en) 2010-05-04 2013-09-26 Soundhound, Inc. Systems and Methods for Sound Recognition
US20120034904A1 (en) 2010-08-06 2012-02-09 Google Inc. Automatically Monitoring for Voice Input Based on Context
US20120075168A1 (en) 2010-09-14 2012-03-29 Osterhout Group, Inc. Eyepiece with uniformly illuminated reflective display
US20120284012A1 (en) 2010-11-04 2012-11-08 Rodriguez Tony F Smartphone-Based Methods and Systems
US20120224743A1 (en) 2011-03-04 2012-09-06 Rodriguez Tony F Smartphone-based methods and systems
US20130007201A1 (en) 2011-06-29 2013-01-03 Gracenote, Inc. Interactive streaming content apparatus, systems and methods
US20130044233A1 (en) 2011-08-17 2013-02-21 Yang Bai Emotional illumination, and related arrangements
US20130090926A1 (en) 2011-09-16 2013-04-11 Qualcomm Incorporated Mobile device context information using speech detection
WO2013043393A1 (en) 2011-09-23 2013-03-28 Digimarc Corporation Context-based smartphone sensor logic
US20130097630A1 (en) 2011-10-14 2013-04-18 Tony F. Rodriguez Arrangements employing content identification and/or distribution identification data

Non-Patent Citations (28)

* Cited by examiner, † Cited by third party
Title
Alin, Object Tracing with Iphone 3G's, dated Feb. 16, 2010.
Benbasat, et al, A framework for the automated generation of power-efficient classifiers for embedded sensor nodes, Proc. of the 5th Int'l Conf. on Embedded Networked Sensor Systems. ACM, 2007.
Brunette, et al., Some sensor network elements for ubiquitous computing, 4th Int'l. Symposium on Information Processing in Sensor Networks, pp. 388-392, 2005.
Chen, et al, "Listen-to-nose: a low-cost system to record nasal symptoms in daily life," 14.sup.th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 590-591.
Csirik, et al Sequential Classifier Combination for Pattern Recognition in Wireless Sensor Networks, 10th Int'l Workshop on Multiple Classifier Systems, Jun. 2011.
Eagle, "Machine Perception and Learning of Complex Social Systems", dated Jun. 27, 2005.
Excerpts from prosecution of corresponding Chinese application 201280054460.1, including amended claims 1-46 on which prosecution is based, and translated Text of the First Action dated Mar. 27, 2015.
Excerpts from prosecution of corresponding European application 12833294.7, including amended claims 1-11 on which prosecution is based, Supplemental European Search Report dated Apr. 1, 2015, and Written Opinion dated Apr. 10, 2015.
Excerpts from prosecution of U.S. Appl. No. 14/189,236, including Action dated Aug. 20, 2014, Response dated Nov. 19, 2014, Final Action dated Feb. 5, 2015, Pre-Brief Conference Request dated May 11, 2015, and Pre-Appeal Conference Decision dated May 21, 2015.
Kang et al, Orchestrator-An Active Resource Orchestration Framework for Mobile Context Monitoring in Sensor-Rich Mobile Environments, IEEE Conf. on Pervasive Computing and Communications, pp. 135-144, 2010.
Lane, et al., A Survey of Mobile Phone Sensing, IEEE Communications Magazine, 48.9, pp. 140-150, 2010.
Larson, et al., "SpiroSmart: using a microphone to measure lung function on a mobile phone," 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 280-289.
Lu, et al, SpeakerSense: Energy Efficient Unobtrusive Speaker Identification on Mobile Phones, Pervasive Computing Conference, Jun. 2011, pp. 188-205.
Lu, et al., "StressSense: Detecting stress in unconstrained acoustic environments using smartphones," 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 351-360.
Miluzzo, "Sensing Meets Mobile Social Network: The Design, Implementation and Evaluation of the CenceMe Application", dated Nov. 5, 2008.
Miluzzo, "Smartphone Sensing", Dated Jun. 2011.
Misra, et al, Optimizing Sensor Data Acquisition for Energy-Efficient Smartphone-based Continuous Event Processing, 12th IEEE International Conf. on Mobile Data Management, Jun. 2011.
Modro, et al, Digital Watermarking Opportunities Enabled by Mobile Media Proliferation, Proc. SPIE, vol. 7254, Jan. 2009.
Nishihara et al, Power Savings in Mobile Devices Using Context-Aware Resource Control, IEEE Conf. on Networking and Computing, 2010, pp. 220-226.
PCT International Search Report and PCT Written Opinion of the International Searching Authority, PCT/US12/54232, mailed Nov. 14, 2012.
Priyantha, et al, EERS: Energy Efficient Responsive Sleeping on Mobile Phones, Workshop on Sensing for App Phones, 2010.
Prosecution excerpts from Japanese patent application P2014-531853, namely pending claims, and English translation of Notice of Reasons for Rejection dated Oct. 25, 2016.
Rodriguez, et al, Evolution of Middleware to Support Mobile Discovery, presented at MobiSense, Jun. 2011.
Soria-Morillo et al, Mobile Architecture for Communication and Development of Applications Based on Context, 12th IEEE Int'l Conf. on Mobile Data Management, Jun. 2011.
Tulusan, et al., "Lullaby: a capture & access system for understanding the sleep environment," 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 226-234.
Wallach, "Smartphone Security: Trends and Predictions", dated Feb. 17, 2011.
Wang, et al, A Framework of Energy Efficient Mobile Sensing for Automatic User State Recognition, MOBISYS 2009, Jun. 2009, 14 pp.
Yatani et al, "BodyScope: a wearable acoustic sensor for activity recognition," 14th International Conference on Ubiquitous Computing, Sep. 5-8, 2012, pp. 341-350.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10199042B2 (en) 2011-04-04 2019-02-05 Digimarc Corporation Context-based smartphone sensor logic
US10510349B2 (en) 2011-04-04 2019-12-17 Digimarc Corporation Context-based smartphone sensor logic
US10930289B2 (en) 2011-04-04 2021-02-23 Digimarc Corporation Context-based smartphone sensor logic
US11049094B2 (en) 2014-02-11 2021-06-29 Digimarc Corporation Methods and arrangements for device to device communication

Also Published As

Publication number Publication date
US20160232898A1 (en) 2016-08-11
CN103918247B (en) 2016-08-24
EP2758956B1 (en) 2021-03-10
KR20140064969A (en) 2014-05-28
US10510349B2 (en) 2019-12-17
US10930289B2 (en) 2021-02-23
CN103918247A (en) 2014-07-09
US20170243584A1 (en) 2017-08-24
JP6251906B2 (en) 2017-12-27
US10199042B2 (en) 2019-02-05
US20130150117A1 (en) 2013-06-13
US9196028B2 (en) 2015-11-24
US20190237082A1 (en) 2019-08-01
WO2013043393A1 (en) 2013-03-28
EP2758956A1 (en) 2014-07-30
JP2015501438A (en) 2015-01-15
EP2758956A4 (en) 2015-05-06
US20180130472A1 (en) 2018-05-10
US20200227048A1 (en) 2020-07-16

Similar Documents

Publication Publication Date Title
US10930289B2 (en) Context-based smartphone sensor logic
EP3622510B1 (en) Intercom-style communication using multiple computing devices
US11361770B2 (en) Detecting user identity in shared audio source contexts
CN107924506B (en) Method, system and computer storage medium for inferring user availability
US11580501B2 (en) Automatic detection and analytics using sensors
KR101528086B1 (en) System and method for providing conference information
US9959885B2 (en) Method for user context recognition using sound signatures
CN104170413B (en) Controlling application programs in a mobile device based on environmental context
CN104035995B (en) Group label generation method and device
US20140136451A1 (en) Determining Preferential Device Behavior
CN110147467A (en) Method, device, mobile terminal and storage medium for generating a text description
CN111241822A (en) Method and device for emotion discovery and dispersion in an input scenario
WO2019212729A1 (en) Generating response based on user's profile and reasoning on contexts
TW202145064A (en) Object counting method, electronic equipment, and computer-readable storage medium
CN111753917A (en) Data processing method, device and storage medium
CN114333804A (en) Audio classification identification method and device, electronic equipment and storage medium
US20230368113A1 (en) Managing disruption between activities in common area environments
Rossi et al. Collaborative personal speaker identification: A generalized approach
CN110134938A (en) Comment analysis method and device
Choujaa et al. Activity recognition from mobile phone data: State of the art, prospects and open problems
CN115510336A (en) Information processing method, information processing device, electronic equipment and storage medium
CN110245343A (en) Bullet-screen comment (danmu) analysis method and device

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4