Mic button #85
Conversation
> **Important:** Review skipped. Auto reviews are disabled on this repository; please check the settings in the CodeRabbit UI.

**Walkthrough**

Adds phone microphone audio streaming to the mobile app with live visualization and Android foreground service support. Introduces a new UI button, a recording hook, enhanced WebSocket streaming/reconnect logic, config plugins/permissions, and documentation updates. Updates build targets and dependencies, and adds a custom Expo config plugin for Android foreground services.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
autonumber
actor User
participant UI as PhoneAudioButton
participant Rec as usePhoneAudioRecorder
participant WS as useAudioStreamer
participant FGS as Notifee/FG Service
participant BE as Backend (/ws_pcm)
User->>UI: Tap "Stream Phone Audio"
UI->>WS: startStreaming(wsUrl, jwt?)
WS->>FGS: ensureFgsRegistered + startForegroundServiceNotification
WS->>BE: Open WebSocket
BE-->>WS: onopen
WS->>WS: send audio-start + start heartbeat
WS-->>UI: set streaming=true
UI->>Rec: startRecording(onPCM)
Rec-->>UI: isRecording=true, audioLevel updates
Rec-->>WS: onPCM(buffer)
WS->>BE: send PCM frames
alt Manual stop
User->>UI: Tap "Stop Phone Audio"
UI->>Rec: stopRecording()
UI->>WS: stopStreaming()
WS->>BE: audio-stop + close(1000,"manual-stop")
WS->>FGS: stopForegroundServiceNotification
else Unexpected close/error
BE-->>WS: close/error
WS->>WS: schedule attemptReconnect (exp backoff)
WS->>UI: error state
end
```

```mermaid
sequenceDiagram
autonumber
participant Net as NetInfo
participant WS as useAudioStreamer
participant BE as Backend
Net-->>WS: becomes online
WS->>WS: if ws not OPEN and url known
WS->>BE: reconnect (exp backoff)
BE-->>WS: onopen -> resume streaming
```

**Estimated code review effort:** 🎯 4 (Complex) | ⏱️ ~60 minutes

**Pre-merge checks (1 passed, 2 warnings)**

❌ Failed checks (2 warnings)
✅ Passed checks (1 passed)
@coderabbitai review

✅ Actions performed: Review triggered.
Actionable comments posted: 8
> **Caution:** Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
app/app.json (1)
31-40: Add Android 14+ microphone FGS permission and service type

You must declare `android.permission.FOREGROUND_SERVICE_MICROPHONE` alongside `RECORD_AUDIO`, and add `android:foregroundServiceType="microphone"` to your service. On Android 14+ targeting API 34+, missing either throws a `MissingForegroundServiceTypeException` or `SecurityException`. Update app/app.json:

```diff
 "permissions": [
   "android.permission.BLUETOOTH",
   "android.permission.BLUETOOTH_ADMIN",
   "android.permission.BLUETOOTH_CONNECT",
   "android.permission.ACCESS_NETWORK_STATE",
   "android.permission.FOREGROUND_SERVICE",
   "android.permission.FOREGROUND_SERVICE_DATA_SYNC",
   "android.permission.POST_NOTIFICATIONS",
-  "android.permission.RECORD_AUDIO"
+  "android.permission.RECORD_AUDIO",
+  "android.permission.FOREGROUND_SERVICE_MICROPHONE"
 ],
```

Then, in your plugin’s `<service>` block, add:

```xml
<service
  android:name="…"
  android:foregroundServiceType="microphone">
  …
</service>
```
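For readers less familiar with Expo config plugins, the snippet below sketches the kind of manifest mutation involved. It is a hedged, standalone illustration, not the repo's actual `with-ws-fgs` plugin: the function name is hypothetical, and a real plugin should target only the specific foreground service rather than every declared `<service>`.

```ts
// Hypothetical sketch: append the "microphone" FGS type to declared services.
// Assumes the service itself is already declared (e.g. by Notifee).
const { withAndroidManifest } = require('@expo/config-plugins');

module.exports = function withMicrophoneFgsType(config) {
  return withAndroidManifest(config, (mod) => {
    const app = mod.modResults.manifest.application?.[0];
    for (const svc of app?.service ?? []) {
      const attrs = svc.$ ?? (svc.$ = {});
      const existing = attrs['android:foregroundServiceType'];
      // Append "microphone" unless it is already listed.
      attrs['android:foregroundServiceType'] =
        existing && !existing.includes('microphone')
          ? `${existing}|microphone`
          : existing || 'microphone';
    }
    return mod;
  });
};
```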
🧹 Nitpick comments (23)
README.md (1)
15-22: Clarify the backend WebSocket endpoint in the quick-start.

To reduce setup friction, explicitly show the Advanced Backend endpoint used by phone audio and an example URL.

```diff
 **Mobile App + Phone Audio (Latest Feature):**
 1. **Setup Backend**: Follow Advanced Backend setup above
-2. **Install Mobile App**: Go to `app/` directory and run `npm install && npm start`
+2. **Install Mobile App**: Go to `app/` directory and run `npm install && npm start`
 3. **Configure Mobile App**: Point to your backend IP in app settings
+   - Example (Advanced Backend): `ws://[your-ip]:8000/ws_pcm`
 4. **Enable Phone Audio**: Tap "Stream Phone Audio" in app for direct microphone streaming
 5. **Grant Permissions**: Allow microphone access when prompted
 6. **Start Streaming**: Speak into phone for real-time processing with live audio visualization
```

app/README.md (6)
120-124: Endpoint is correct; unify the Simple Backend path across docs.

Advanced `/ws_pcm` is correct here. Minor: earlier in this file, Line 115 references `backends/simple/`, while the root README uses `backends/simple-backend/`. Please standardize on one path to avoid confusion.

Would you like me to scan the repo and list all occurrences of the two paths so you can choose one?
154-165: Call out the Android foreground service/notifications requirement for background streaming.

Users on Android 13+ will need notifications enabled; a foreground service is required for background-safe streaming.

```diff
 ### Features
 - **Direct Microphone Access**: Use phone's built-in microphone
 - **Real-time Audio Streaming**: Live audio processing with visualization
 - **Seamless Integration**: Switch between Bluetooth and phone audio modes
 - **Cross-Platform**: Works on both iOS and Android
 - **Live Audio Meters**: Visual feedback showing audio levels in real-time
+- **Android Background Safe**: Requires enabling notifications and running a foreground service (Notifee)
```
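For context on what that foreground service amounts to at runtime, here is a minimal sketch using Notifee's documented API. The ids and strings are illustrative, and it assumes a foreground-service runner was registered via `notifee.registerForegroundService` at startup:

```ts
import notifee, { AndroidImportance } from '@notifee/react-native';

// Sketch only: channel id, notification id, and copy are illustrative.
export async function showStreamingFgsNotification(): Promise<void> {
  const channelId = await notifee.createChannel({
    id: 'audio-streaming',
    name: 'Audio Streaming',
    importance: AndroidImportance.LOW, // silent, persistent status notification
  });
  await notifee.displayNotification({
    id: 'audio-streaming-fgs',
    title: 'Streaming phone audio',
    body: 'Microphone audio is being sent to the backend.',
    android: {
      channelId,
      asForegroundService: true, // promotes the notification to a foreground service
      ongoing: true,             // prevents the user from swiping it away
    },
  });
}
```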
166-174: Add an Android notification permission step.

This helps avoid silent failures on Android 13+.

```diff
 #### Enable Phone Audio Streaming
 1. **Open Friend-Lite app**
 2. **Configure Backend Connection** (see Backend Configuration section)
 3. **Grant Microphone Permissions** when prompted
+4. **Allow Notifications (Android 13+)** when prompted so background streaming works
 4. **Tap "Stream Phone Audio" button** in main interface
 5. **Start speaking** - audio streams in real-time to backend
```
186-207: Add an Android battery optimization tip and a quick connectivity test.

These are common causes of intermittent streaming.

```diff
 #### Audio Not Streaming
 - **Check Permissions**: Ensure microphone access granted
 - **Verify Backend URL**: Confirm `ws://[ip]:8000/ws_pcm` format
 - **Network Connection**: Test backend connectivity
 - **Authentication**: Verify JWT token is valid
+- **Disable Battery Optimizations (Android)**: Allow app to run unrestricted in background

 #### Poor Audio Quality
 - **Check Signal Strength**: Ensure stable network connection
 - **Reduce Background Noise**: Use in quiet environment
 - **Restart Recording**: Stop and restart phone audio streaming
+- **Test WebSocket**: From a laptop on the same network, test `ws://[backend-ip]:8000/ws_pcm` connectivity
```
232-241: Show a concrete WS URL example (with optional JWT/device).

This reduces config errors and helps multi-device setups.

```diff
 #### Option 2: Phone Audio Streaming
 1. **Configure backend connection** (see Configuration Steps above)
 2. **Enable phone audio**:
    - Tap "Stream Phone Audio" button
    - Grant microphone permissions when prompted
 3. **Start speaking**:
    - Speak directly into phone microphone
    - Watch real-time audio level visualization
-   - Audio streams to backend automatically
+   - Audio streams to backend automatically
+   - Example URL: `wss://your-domain/ws_pcm?token=<JWT>&device=<YourPhoneName>`
```
267-273: Mention the Android notifications permission here too.

Troubleshooting should include POST_NOTIFICATIONS for Android 13+.

```diff
 **Phone Audio Streaming Issues:**
 - Grant microphone permissions in device settings
+- On Android 13+: ensure "Notifications" permission is granted
 - Ensure stable network connection to backend
 - Restart phone audio streaming if no data flowing
 - Check backend logs for audio data reception
 - Verify JWT authentication token is valid
```

app/package.json (1)
15-15: Remove unused @react-native/virtualized-lists dependency.

No imports of the external package were found; drop it from app/package.json (line 15) to avoid bundling duplicates.

app/plugins/with-ws-fgs.js (1)
17-35: Normalize the permission array and include RECORD_AUDIO when microphone=true.

Minor hardening: ensure `uses-permission` is always an array to avoid shape edge cases; also add `RECORD_AUDIO` when `microphone` is enabled to make this plugin self-contained.

```diff
-  // Ensure uses-permission array exists
-  manifest.manifest['uses-permission'] ||= [];
+  // Ensure uses-permission array exists and is normalized
+  const ensureArray = (v) => (Array.isArray(v) ? v : v ? [v] : []);
+  manifest.manifest['uses-permission'] = ensureArray(
+    manifest.manifest['uses-permission']
+  );
 @@
   addPerm('android.permission.FOREGROUND_SERVICE');
   addPerm('android.permission.FOREGROUND_SERVICE_DATA_SYNC'); // we use this for the WS keep-alive
   addPerm('android.permission.POST_NOTIFICATIONS');
   if (microphone) {
     addPerm('android.permission.FOREGROUND_SERVICE_MICROPHONE');
-    // If you actually record audio, you'll also need RECORD_AUDIO elsewhere.
+    addPerm('android.permission.RECORD_AUDIO');
   }
```

app/app/components/PhoneAudioButton.tsx (2)
93-98: Add accessibility props for screen readers.

Expose role/label/state for VoiceOver/TalkBack. Apply this diff:

```diff
 <TouchableOpacity
   style={getButtonStyle()}
   onPress={onPress}
   disabled={isDisabled || isInitializing}
   activeOpacity={0.7}
+  accessibilityRole="button"
+  accessibilityLabel={
+    isInitializing
+      ? 'Initializing phone audio'
+      : isRecording
+      ? 'Stop phone audio'
+      : 'Start phone audio stream'
+  }
+  accessibilityState={{ disabled: isDisabled || isInitializing, busy: isInitializing }}
 >
```
78-83: Consider replacing emoji icons with vector icons for consistent cross-platform rendering.

Emoji style varies by OEM/font. Using `@expo/vector-icons` (e.g., Ionicons mic/mic-off) yields consistent visuals and better accessibility labels.

app/app.json (1)
84-89: Avoid duplicate/blanket cleartext traffic in production.

`usesCleartextTraffic` is set both at the top-level Android config and via `expo-build-properties`. Prefer a single source, and keep it off in release builds if possible.

Also applies to: 41-41
app/app/hooks/usePhoneAudioRecorder.ts (4)
3-11: Prune unused imports and destructured values.

`Platform`, `AudioRecording`, `AudioAnalysis`, `pauseRecording`, and `resumeRecording` are unused. Apply this diff:

```diff
-import { Alert, Platform } from 'react-native';
+import { Alert } from 'react-native';
 @@
-import {
-  useAudioRecorder,
-  AudioRecording,
-  AudioAnalysis,
-  ExpoAudioStreamModule,
-} from '@siteed/expo-audio-studio';
+import { useAudioRecorder, ExpoAudioStreamModule } from '@siteed/expo-audio-studio';
 @@
 const {
   startRecording: startRecorderInternal,
   stopRecording: stopRecorderInternal,
   isRecording,
-  pauseRecording,
-  resumeRecording,
   analysisData,
 } = useAudioRecorder();
```

Also applies to: 41-48
97-118: Remove dead code: `checkPermissions` is never used.

Either call it in `startRecording` or delete it. Given you already request permissions in `startRecording`, the safest option is to remove it. Apply this diff:

```diff
-  // Check and request microphone permissions
-  const checkPermissions = useCallback(async (): Promise<boolean> => {
-    try {
-      const { granted } = await ExpoAudioStreamModule.getPermissionsAsync();
-      if (granted) {
-        return true;
-      }
-
-      const { granted: newGranted } = await ExpoAudioStreamModule.requestPermissionsAsync();
-      if (!newGranted) {
-        Alert.alert(
-          'Microphone Permission Required',
-          'Please enable microphone access in your device settings to use phone audio streaming.',
-          [{ text: 'OK' }]
-        );
-        return false;
-      }
-      return true;
-    } catch (error) {
-      console.error('[PhoneAudioRecorder] Permission check error:', error);
-      return false;
-    }
-  }, []);
```
140-162: Single source of truth for the recording config to prevent drift.

You define `RECORDING_CONFIG` but handcraft `config`. Spread the constant and override specifics. Apply this diff:

```diff
 @@
-    // EXACT config from 2025 guide + processing for audio levels
-    const config = {
-      interval: 100,
-      sampleRate: 16000,
-      channels: 1,
-      encoding: "pcm_16bit" as const,
+    // Base config with analysis enabled
+    const config = {
+      ...RECORDING_CONFIG,
       enableProcessing: true, // Enable audio analysis for live RMS
-      intervalAnalysis: 500, // Analysis every 500ms
+      intervalAnalysis: 500, // Analysis every 500ms
       onAudioStream: (event: AudioDataEvent) => {
         if (!mountedRef.current) return;
         const pcmBuffer = processAudioDataEvent(event);
         if (onAudioDataRef.current && pcmBuffer && pcmBuffer.length > 0) {
           onAudioDataRef.current(pcmBuffer);
         }
       }
     };
```

Also applies to: 23-30
54-67: Reduce extremely chatty logging in hot paths.

Per-frame/100 ms logs will impact performance and may expose sensitive telemetry. Gate them with `__DEV__` or remove them.

Also applies to: 186-214, 218-224
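A tiny sketch of that gating (the helper name is illustrative):

```ts
// Dev-only logger: logs in development builds, a no-op in release.
const devLog = (...args: unknown[]): void => {
  if (__DEV__) console.log('[PhoneAudioRecorder]', ...args);
};

// e.g. inside the per-frame callback:
// devLog('PCM frame bytes:', pcmBuffer.length);
```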
app/app/hooks/useAudioStreamer.ts (4)
56-79: Scope Notifee/FGS to Android only.

Avoid showing notifications or requesting permissions on iOS, where FGS isn’t applicable. This keeps the UX clean and prevents no-op permission prompts.

```diff
 async function startForegroundServiceNotification(title: string, body: string) {
+  if (Platform.OS !== 'android') return;
   ensureFgsRegistered();
   await ensureNotificationPermission();
   // Create channel if needed
   await notifee.createChannel({
 @@
 async function stopForegroundServiceNotification() {
-  try {
+  if (Platform.OS !== 'android') return;
+  try {
     await notifee.stopForegroundService();
   } catch {}
   try {
     await notifee.cancelNotification(FGS_NOTIFICATION_ID);
   } catch {}
 }
 @@
 const notifyInfo = useCallback(async (title: string, body: string) => {
-  try {
+  if (Platform.OS !== 'android') return;
+  try {
     await notifee.displayNotification({
       title,
       body,
       android: { channelId: FGS_CHANNEL_ID },
     });
```

Also applies to: 81-88, 122-134
235-240: Handle NetInfo isInternetReachable === null.

Treat “unknown reachability” as online to avoid false negatives at app start.

```diff
-    const netState = await NetInfo.fetch();
-    if (!netState.isConnected || !netState.isInternetReachable) {
+    const netState = await NetInfo.fetch();
+    const online = !!netState.isConnected && (netState.isInternetReachable !== false);
+    if (!online) {
       const errorMsg = 'No internet connection.';
       setStateSafe(setError, errorMsg);
       return Promise.reject(new Error(errorMsg));
     }
```
231-247: Clear pending reconnect timer before manual start.

This prevents a queued auto-reconnect from racing a user-initiated start.

```diff
     currentUrlRef.current = trimmed;
     manuallyStoppedRef.current = false;

+    // Cancel any pending scheduled reconnect
+    if (reconnectTimeoutRef.current) {
+      clearTimeout(reconnectTimeoutRef.current);
+      reconnectTimeoutRef.current = null;
+    }
+
     // Network gate
     const netState = await NetInfo.fetch();
```
348-361: Nit: return the NetInfo unsubscribe directly.

Slightly cleaner, and avoids wrapping a stable function.

```diff
-    const sub = NetInfo.addEventListener(state => {
+    const sub = NetInfo.addEventListener(state => {
       ...
-    });
-    return () => sub();
+    });
+    return sub;
```
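For context, the `attemptReconnect (exp backoff)` step from the sequence diagrams typically amounts to something like the sketch below; the constants and function shape are illustrative, not the hook's actual internals:

```ts
// Illustrative exponential-backoff scheduler with a cap and jitter.
const BASE_DELAY_MS = 1_000;
const MAX_DELAY_MS = 30_000;

function scheduleReconnect(
  attempt: number,
  connect: () => void,
): ReturnType<typeof setTimeout> {
  // 1s, 2s, 4s, ... capped at 30s; jitter avoids synchronized retries.
  const delay = Math.min(BASE_DELAY_MS * 2 ** attempt, MAX_DELAY_MS);
  return setTimeout(connect, delay + Math.random() * 250);
}
```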
318-349: DRY: Extract a WS PCM URL builder (http→ws, /ws_pcm, token/device params).

This logic will likely be reused; centralize it in a small util to reduce bugs and drift across features.

Add a utility (new file app/app/utils/ws.ts):

```ts
export function buildWsPcmUrl(baseUrl: string, params?: Record<string, string>) {
  let url = (baseUrl || '').trim()
    .replace(/^http:/, 'ws:')
    .replace(/^https:/, 'wss:')
    .replace(/\/$/, '');
  if (!/\/ws_pcm(?:$|[/?])/i.test(url)) url = `${url}/ws_pcm`;
  if (!params || Object.keys(params).length === 0) return url;
  const qs = new URLSearchParams(params);
  return `${url}${url.includes('?') ? '&' : '?'}${qs.toString()}`;
}
```

Then replace here:

```diff
-    let finalWebSocketUrl = webSocketUrl.trim();
-
-    // Convert HTTP/HTTPS to WS/WSS protocol
-    finalWebSocketUrl = finalWebSocketUrl.replace(/^http:/, 'ws:').replace(/^https:/, 'wss:');
-
-    // Ensure /ws_pcm endpoint is included
-    if (!finalWebSocketUrl.includes('/ws_pcm')) {
-      // Remove trailing slash if present, then add /ws_pcm
-      finalWebSocketUrl = finalWebSocketUrl.replace(/\/$/, '') + '/ws_pcm';
-    }
-
-    // Check if this is the advanced backend (requires authentication) or simple backend
-    const isAdvancedBackend = jwtToken && isAuthenticated;
-
-    if (isAdvancedBackend) {
-      // Advanced backend: include JWT token and device parameters
-      const params = new URLSearchParams();
-      params.append('token', jwtToken);
-
-      const deviceName = userId && userId.trim() !== '' ? userId.trim() : 'phone-mic';
-      params.append('device_name', deviceName);
-      console.log('[App.tsx] Using advanced backend with token and device_name:', deviceName);
-
-      const separator = finalWebSocketUrl.includes('?') ? '&' : '?';
-      finalWebSocketUrl = `${finalWebSocketUrl}${separator}${params.toString()}`;
-      console.log('[App.tsx] Advanced backend WebSocket URL constructed for phone audio');
-    } else {
-      // Simple backend: use URL as-is without authentication
-      console.log('[App.tsx] Using simple backend without authentication for phone audio');
-    }
+    const isAdvancedBackend = jwtToken && isAuthenticated;
+    const deviceName = userId && userId.trim() !== '' ? userId.trim() : 'phone-mic';
+    const finalWebSocketUrl = buildWsPcmUrl(
+      webSocketUrl,
+      isAdvancedBackend ? { token: jwtToken!, device_name: deviceName } : undefined
+    );
+    if (isAdvancedBackend) {
+      console.log('[App.tsx] Advanced backend WebSocket URL constructed for phone audio');
+    } else {
+      console.log('[App.tsx] Using simple backend without authentication for phone audio');
+    }
```
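Assuming that extraction, call sites would reduce to something like the following (values illustrative):

```ts
// Simple backend: no auth parameters.
buildWsPcmUrl('http://192.168.1.10:8000');
// -> "ws://192.168.1.10:8000/ws_pcm"

// Advanced backend: JWT and device name become query parameters.
buildWsPcmUrl('https://example.com', { token: jwt, device_name: 'phone-mic' });
// -> "wss://example.com/ws_pcm?token=...&device_name=phone-mic"
```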
291-296: Avoid awaiting inside hot audio callbacks to reduce backpressure.

Let WS sends proceed without blocking the recorder loop; handle errors in the background.

```diff
-      if (wsReadyState === WebSocket.OPEN && audioBytes.length > 0) {
-        await audioStreamer.sendAudio(audioBytes);
-      }
+      if (wsReadyState === WebSocket.OPEN && audioBytes.length > 0) {
+        void audioStreamer.sendAudio(audioBytes).catch(e => console.warn('[App.tsx] sendAudio (OMI) failed:', e?.message || e));
+      }
```

```diff
-      if (wsReadyState === WebSocket.OPEN && pcmBuffer.length > 0) {
-        await audioStreamer.sendAudio(pcmBuffer);
-      }
+      if (wsReadyState === WebSocket.OPEN && pcmBuffer.length > 0) {
+        void audioStreamer.sendAudio(pcmBuffer).catch(e => console.warn('[App.tsx] sendAudio (phone) failed:', e?.message || e));
+      }
```

Also applies to: 354-359
119-129: Await stopStreaming where possible to flush audio-stop and close cleanly.

This makes shutdown deterministic and avoids races with queued reconnect timers.

```diff
-    console.log('[App.tsx] Disconnect: Stopping custom audio streaming.');
-    audioStreamer.stopStreaming();
+    console.log('[App.tsx] Disconnect: Stopping custom audio streaming.');
+    await audioStreamer.stopStreaming();
```

```diff
     await originalStopAudioListener();
-    audioStreamer.stopStreaming();
+    await audioStreamer.stopStreaming();
```

```diff
-    await phoneAudioRecorder.stopRecording();
-    audioStreamer.stopStreaming();
+    await phoneAudioRecorder.stopRecording();
+    await audioStreamer.stopStreaming();
```

```diff
-    refs.stopAudioStreaming();
+    void refs.stopAudioStreaming();
```

```diff
-    if (audioStreamer.isStreaming) audioStreamer.stopStreaming();
+    if (audioStreamer.isStreaming) await audioStreamer.stopStreaming();
```

Also applies to: 306-309, 374-378, 420-423, 301-302, 367-370
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
⛔ Files ignored due to path filters (2)
- `app/assets/notification-icon.png` is excluded by `!**/*.png`
- `app/package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (9)
- `README.md` (1 hunks)
- `app/README.md` (7 hunks)
- `app/app.json` (3 hunks)
- `app/app/components/PhoneAudioButton.tsx` (1 hunks)
- `app/app/hooks/useAudioStreamer.ts` (3 hunks)
- `app/app/hooks/usePhoneAudioRecorder.ts` (1 hunks)
- `app/app/index.tsx` (6 hunks)
- `app/package.json` (1 hunks)
- `app/plugins/with-ws-fgs.js` (1 hunks)
🧰 Additional context used
🧠 Learnings (3)
📚 Learning: 2025-09-04T08:38:51.927Z
Learnt from: thestumonkey
PR: AnkushMalaker/friend-lite#77
File: backends/advanced/webui/src/hooks/useAudioRecording.ts:134-171
Timestamp: 2025-09-04T08:38:51.927Z
Learning: The WebSocket URL construction logic in both useAudioRecording.ts and useSimpleAudioRecording.ts is nearly identical and should be extracted into a shared utility function to eliminate code duplication and centralize the complex SSL/WSS protocol handling logic.
Applied to files:
app/app/index.tsx
📚 Learning: 2025-09-05T06:06:37.640Z
Learnt from: CR
PR: AnkushMalaker/friend-lite#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-05T06:06:37.640Z
Learning: Applies to backends/advanced/src/main.py : Advanced backend /ws_pcm endpoint must parse Wyoming events and only process audio chunks within an active session (after audio-start until audio-stop)
Applied to files:
app/README.md
📚 Learning: 2025-09-05T06:06:37.640Z
Learnt from: CR
PR: AnkushMalaker/friend-lite#0
File: CLAUDE.md:0-0
Timestamp: 2025-09-05T06:06:37.640Z
Learning: Applies to app/**/*.{ts,tsx,js,jsx} : Mobile app WebSocket streaming must implement the Wyoming protocol: send JSONL header (audio-start/chunk/stop) followed by binary payloads
Applied to files:
app/app/hooks/useAudioStreamer.ts
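For readers unfamiliar with that framing, the sketch below shows the convention: a JSON header line terminated by a newline, then the raw PCM bytes. The helper is illustrative, not the hook's actual code; the `rate`/`width`/`channels` fields follow the Wyoming audio event shape.

```ts
// Sketch: emit a Wyoming event as one JSON line, then the binary payload.
function sendWyomingEvent(
  ws: WebSocket,
  type: string,
  data: Record<string, unknown>,
  payload?: Uint8Array,
): void {
  const header = { type, data, payload_length: payload ? payload.byteLength : null };
  ws.send(JSON.stringify(header) + '\n'); // JSONL header first
  if (payload?.byteLength) {
    // React Native's WebSocket accepts ArrayBuffer, so send the exact byte range.
    ws.send(payload.buffer.slice(payload.byteOffset, payload.byteOffset + payload.byteLength));
  }
}

// Session lifecycle around the chunks:
// sendWyomingEvent(ws, 'audio-start', { rate: 16000, width: 2, channels: 1 });
// sendWyomingEvent(ws, 'audio-chunk', { rate: 16000, width: 2, channels: 1 }, pcmFrame);
// sendWyomingEvent(ws, 'audio-stop', {});
```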
🪛 GitHub Actions: iOS IPA Build
app/app.json
[error] 1-1: Missing ios.infoPlist.ITSAppUsesNonExemptEncryption boolean in app.json. Manual configuration is required in App Store Connect before the app can be tested.
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (1)
- GitHub Check: build
🔇 Additional comments (12)
app/README.md (5)
8-8: Phone Audio feature callout looks good. Matches the new UI/hook flow.

13-13: Live visualization note is clear. Good alignment with the visualization UI.

181-185: Source switching guidance looks good. Clear UX expectations.

222-222: Bluetooth option header update is fine.

367-367: Link fix to Advanced Backend looks correct.

app/package.json (4)
12-12: Adding Notifee is appropriate for FGS support. Ensure the runtime request for notifications is implemented where background streaming is used.

If not already, confirm you call `notifee.requestPermission()` on Android 13+ during setup.
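A one-call sketch of that setup step: on Android 13+ this surfaces the POST_NOTIFICATIONS runtime prompt, while older versions resolve immediately (the helper name is illustrative, and the hook's actual permission check may differ):

```ts
import notifee, { AuthorizationStatus } from '@notifee/react-native';

// Sketch: treats AUTHORIZED (and PROVISIONAL) as granted.
async function requestNotificationPermission(): Promise<boolean> {
  const settings = await notifee.requestPermission();
  return settings.authorizationStatus >= AuthorizationStatus.AUTHORIZED;
}
```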
16-16: The audio capture dependency looks appropriate. Matches the new mic recording flow.

19-20: Expo build properties and the dev client look consistent with SDK 53. Docs already include prebuild/dev-client steps.

26-27: Versions validated: Expo SDK 53 ships React 19.0.0 with React Native 0.79.x, so the react and react-native entries in app/package.json are correctly paired.

app/plugins/with-ws-fgs.js (1)
12-53: Solid manifest plugin for Notifee FGS types and permissions. Good use of withAndroidManifest, the conditional microphone type, and idempotent permission/service injection.
app/app.json (1)
85-86: SDK 35 support confirmed. Expo SDK 53 uses compileSdkVersion 35 and targetSdkVersion 35; Notifee requires compileSdkVersion ≥ 34, so 35 is supported. CI is green, indicating no action is needed.
219-224: The verification scripts are running; I’ll review the `useAudioRecorder` definition and its `rms` data type once they complete.
| "infoPlist": { | ||
| "NSMicrophoneUsageDescription": "Friend-Lite needs access to your microphone to stream audio to the backend for processing." | ||
| } |
Fix iOS build failure: add ITSAppUsesNonExemptEncryption.

The pipeline error indicates this key is missing. Set it explicitly (typically `false`, unless you ship custom encryption). Apply this diff:
"infoPlist": {
"NSMicrophoneUsageDescription": "Friend-Lite needs access to your microphone to stream audio to the backend for processing."
+ ,"ITSAppUsesNonExemptEncryption": false
}📝 Committable suggestion
🤖 Prompt for AI Agents
In app/app.json around lines 21 to 23, add the ITSAppUsesNonExemptEncryption key
to the infoPlist object (set to false unless your app ships custom encryption)
to satisfy iOS build pipeline requirements; update the JSON by inserting
"ITSAppUsesNonExemptEncryption": false alongside the existing
NSMicrophoneUsageDescription entry so the Info.plist includes this explicit key.
| "./plugins/with-ws-fgs", | ||
| { | ||
| "android": { | ||
| "permissions": [ | ||
| "FOREGROUND_SERVICE", | ||
| "FOREGROUND_SERVICE_DATA_SYNC", | ||
| "POST_NOTIFICATIONS" | ||
| ] | ||
| } | ||
| } | ||
| ], |
💡 Verification agent
🧩 Analysis chain
Declare microphone FGS type in the custom plugin too.
Ensure the manifest gets a service with foregroundServiceType="microphone" and that the permission is listed.
Apply this diff:
```diff
 [
   "./plugins/with-ws-fgs",
   {
     "android": {
       "permissions": [
         "FOREGROUND_SERVICE",
         "FOREGROUND_SERVICE_DATA_SYNC",
-        "POST_NOTIFICATIONS"
+        "POST_NOTIFICATIONS",
+        "FOREGROUND_SERVICE_MICROPHONE"
       ]
     }
   }
 ],
```

Then verify your plugin injects android:foregroundServiceType="microphone" on the service. If not, we should extend it.
🏁 Script executed:
```bash
#!/bin/bash
# Check plugin and generated AndroidManifest for microphone FGS type
rg -n "with-ws-fgs|foregroundServiceType.*microphone|FOREGROUND_SERVICE_MICROPHONE" -S -C2
```

Length of output: 743
🏁 Script executed:
```bash
#!/bin/bash
# Display the custom plugin code to verify it injects the service with foregroundServiceType="microphone"
sed -n '1,200p' app/plugins/with-ws-fgs.js
```

Length of output: 1910
Include the microphone flag and permission in your plugin config.

In app/app.json, update the plugin entry to pass microphone: true and add the FOREGROUND_SERVICE_MICROPHONE permission:

```diff
 [
   "./plugins/with-ws-fgs",
   {
+    "microphone": true,
     "android": {
       "permissions": [
         "FOREGROUND_SERVICE",
         "FOREGROUND_SERVICE_DATA_SYNC",
-        "POST_NOTIFICATIONS"
+        "POST_NOTIFICATIONS",
+        "FOREGROUND_SERVICE_MICROPHONE"
       ]
     }
   }
 ],
```

This ensures your custom plugin will inject android:foregroundServiceType="dataSync|microphone" and declare the microphone FGS permission.
🤖 Prompt for AI Agents
In app/app.json around lines 69 to 79, the plugin config for
"./plugins/with-ws-fgs" is missing the microphone flag and corresponding
permission; update the plugin object to include "microphone": true and add
"FOREGROUND_SERVICE_MICROPHONE" to the android.permissions array so the plugin
will inject android:foregroundServiceType="dataSync|microphone" and declare the
microphone FGS permission.
```tsx
React.useEffect(() => {
  if (isRecording) {
    // Create pulsing animation when recording
    Animated.loop(
      Animated.sequence([
        Animated.timing(pulseAnim, {
          toValue: 1.1,
          duration: 500,
          useNativeDriver: true,
        }),
        Animated.timing(pulseAnim, {
          toValue: 1,
          duration: 500,
          useNativeDriver: true,
        }),
      ])
    ).start();
  } else {
    // Stop animation when not recording
    pulseAnim.setValue(1);
  }
}, [isRecording, pulseAnim]);
```
Stop and clean up the pulsing animation to avoid a runaway loop and leaks.

`Animated.loop(...).start()` is created each time recording starts, but never stopped. When `isRecording` flips to false, the loop keeps running under the hood. Track the loop in a ref, stop it when recording stops, and stop it again on unmount.
Apply this diff:

```diff
 @@
-  // Animated value for pulsing effect when recording
-  const pulseAnim = React.useRef(new Animated.Value(1)).current;
+  // Animated value for pulsing effect when recording
+  const pulseAnim = React.useRef(new Animated.Value(1)).current;
+  const pulseLoopRef = React.useRef<Animated.CompositeAnimation | null>(null);
 @@
-  React.useEffect(() => {
-    if (isRecording) {
-      // Create pulsing animation when recording
-      Animated.loop(
-        Animated.sequence([
-          Animated.timing(pulseAnim, {
-            toValue: 1.1,
-            duration: 500,
-            useNativeDriver: true,
-          }),
-          Animated.timing(pulseAnim, {
-            toValue: 1,
-            duration: 500,
-            useNativeDriver: true,
-          }),
-        ])
-      ).start();
-    } else {
-      // Stop animation when not recording
-      pulseAnim.setValue(1);
-    }
-  }, [isRecording, pulseAnim]);
+  React.useEffect(() => {
+    if (isRecording) {
+      if (!pulseLoopRef.current) {
+        pulseLoopRef.current = Animated.loop(
+          Animated.sequence([
+            Animated.timing(pulseAnim, { toValue: 1.1, duration: 500, useNativeDriver: true }),
+            Animated.timing(pulseAnim, { toValue: 1, duration: 500, useNativeDriver: true }),
+          ])
+        );
+      }
+      pulseLoopRef.current.start();
+    } else {
+      pulseLoopRef.current?.stop();
+      pulseLoopRef.current = null;
+      pulseAnim.setValue(1);
+    }
+    return () => {
+      pulseLoopRef.current?.stop();
+      pulseLoopRef.current = null;
+    };
+  }, [isRecording, pulseAnim]);
```

Also applies to: 85-92
```tsx
style={[
  styles.audioLevelBar,
  { width: `${Math.min(audioLevel * 100, 100)}%` },
]}
/>
```
🛠️ Refactor suggestion

Clamp audioLevel to [0,1] to prevent negative or >100% widths.

If `audioLevel` drifts outside [0,1], the bar width can be invalid. Apply this diff:

```diff
-  { width: `${Math.min(audioLevel * 100, 100)}%` },
+  { width: `${Math.max(0, Math.min(audioLevel, 1)) * 100}%` },
```
🤖 Prompt for AI Agents
In app/app/components/PhoneAudioButton.tsx around lines 115 to 119, the style
width uses audioLevel directly which can produce negative or >100% widths; clamp
audioLevel to the [0,1] range before converting to percent (e.g. use const
safeLevel = Number.isFinite(audioLevel) ? Math.max(0, Math.min(audioLevel, 1)) :
0 and then width: `${safeLevel * 100}%`) so the computed style always yields a
valid percentage.
```diff
       event.version = '1.0.0';
       event.payload_length = payload ? payload.length : null;

       // Send JSON header with newline
       const jsonHeader = JSON.stringify(event) + '\n';
       websocketRef.current.send(jsonHeader);
-      console.debug(`[AudioStreamer] Sent Wyoming event: ${event.type} (payload_length: ${event.payload_length})`);

-      // Send binary payload if exists
-      if (payload && payload.length > 0) {
-        websocketRef.current.send(payload);
-        console.debug(`[AudioStreamer] Sent audio payload: ${payload.length} bytes`);
-      }
+      if (payload?.length) websocketRef.current.send(payload);
     } catch (e) {
```
Fix: send binary as ArrayBuffer (RN WebSocket doesn’t accept Uint8Array).

React Native's WebSocket.send supports string or ArrayBuffer. Sending a Uint8Array risks runtime errors and/or a wrong slice if the view doesn’t cover the whole buffer. Also prefer `byteLength`. Apply:
```diff
-      event.version = '1.0.0';
-      event.payload_length = payload ? payload.length : null;
+      const payloadBytes = payload ? payload.byteLength : 0;
+      event.version = '1.0.0';
+      event.payload_length = payload ? payloadBytes : null;
       const jsonHeader = JSON.stringify(event) + '\n';
       websocketRef.current.send(jsonHeader);
-      if (payload?.length) websocketRef.current.send(payload);
+      if (payload && payloadBytes) {
+        const ab =
+          payload.byteOffset === 0 && payloadBytes === payload.buffer.byteLength
+            ? payload.buffer
+            : payload.buffer.slice(payload.byteOffset, payload.byteOffset + payloadBytes);
+        websocketRef.current.send(ab);
+      }
```
🤖 Prompt for AI Agents
In app/app/hooks/useAudioStreamer.ts around lines 142-148, the code sets
event.payload_length and sends a Uint8Array payload directly; update it to use
payload.byteLength for payload_length and send an ArrayBuffer view that exactly
covers the bytes (use payload.buffer.slice(payload.byteOffset,
payload.byteOffset + payload.byteLength)) since RN WebSocket.send accepts string
or ArrayBuffer (not Uint8Array) to avoid runtime errors or incorrect slices;
keep sending the jsonHeader string first, then send the sliced ArrayBuffer when
payload exists.
```tsx
onAudioStream: async (event: AudioDataEvent) => {
  // EXACT payload handling from guide
  const payload = typeof event.data === "string"
    ? event.data
    : Buffer.from(event.data as ArrayBuffer).toString("base64");

  // Convert to our expected format
  if (onAudioDataRef.current && mountedRef.current) {
    const pcmBuffer = processAudioDataEvent(event);
    if (pcmBuffer && pcmBuffer.length > 0) {
      onAudioDataRef.current(pcmBuffer);
    }
  }
}
};
```
Remove the Node Buffer usage and the unused payload to prevent Metro bundling issues.

`Buffer.from(...)` is not guaranteed to exist in RN/Expo, and the computed payload is unused. This can break builds. Apply this diff:
```diff
-      onAudioStream: async (event: AudioDataEvent) => {
-        // EXACT payload handling from guide
-        const payload = typeof event.data === "string"
-          ? event.data
-          : Buffer.from(event.data as ArrayBuffer).toString("base64");
-
-        // Convert to our expected format
-        if (onAudioDataRef.current && mountedRef.current) {
-          const pcmBuffer = processAudioDataEvent(event);
-          if (pcmBuffer && pcmBuffer.length > 0) {
-            onAudioDataRef.current(pcmBuffer);
-          }
-        }
-      }
+      onAudioStream: (event: AudioDataEvent) => {
+        if (!mountedRef.current) return;
+        const pcmBuffer = processAudioDataEvent(event);
+        if (onAudioDataRef.current && pcmBuffer && pcmBuffer.length > 0) {
+          onAudioDataRef.current(pcmBuffer);
+        }
+      }
```
🤖 Prompt for AI Agents
In app/app/hooks/usePhoneAudioRecorder.ts around lines 148 to 162, remove the
unused payload and the Node Buffer usage (Buffer.from(...)), which can break
Metro bundling in RN/Expo; simply delete the payload computation and any
branching that uses Buffer, and keep the existing logic that calls
processAudioDataEvent(event) and, if pcmBuffer exists and has length, invokes
onAudioDataRef.current(pcmBuffer); ensure no new Buffer or global Buffer
references remain in this function.
```json
  },
  "devDependencies": {
    "@babel/core": "^7.20.0",
    "@react-native-community/cli": "latest",
```
🛠️ Refactor suggestion

Avoid unpinned “latest” for @react-native-community/cli.

This can silently break builds. Either remove it (Expo manages the CLI) or pin it to a known compatible version.

```diff
 "devDependencies": {
   "@babel/core": "^7.20.0",
-  "@react-native-community/cli": "latest",
   "@types/react": "~19.0.10",
   "typescript": "~5.8.3"
 },
```
🤖 Prompt for AI Agents
In app/package.json around line 34, the dependency
"@react-native-community/cli": "latest" is unpinned which can silently break
builds; either remove the dependency entirely (since Expo manages the CLI) or
replace "latest" with a specific, tested version string (e.g. "6.0.0" or
whatever your CI currently uses). Update package.json accordingly, run your
package manager to regenerate lockfile, and verify the app builds locally/CI to
confirm compatibility.
```markdown
#### Requirements
- **iOS**: iOS 13+ with microphone permissions
- **Android**: Android API 21+ with microphone permissions
- **Network**: Stable connection to Friend-Lite backend
- **Backend**: Advanced backend running with `/ws_pcm` endpoint
```
💡 Verification agent
🧩 Analysis chain
Verify minimum OS/API levels stated.
React Native/Expo baselines may be higher than iOS 13 / API 21 with RN 0.79 + Expo SDK 53.
🌐 Web query:
What minimum iOS version and Android API level are supported by Expo SDK 53 with React Native 0.79?
💡 Result:
Expo SDK 53 (React Native 0.79) requires iOS 15.1+ and Android 7.0+ (minSdkVersion API level 24); compile/target SDK = 35. [1][2]
Sources:
[1] Expo docs — Supported OS / SDK table.
[2] Expo SDK 53 changelog.
Update minimum OS/API levels in README
Change iOS requirement to 15.1+ and Android minSdkVersion to API level 24 (Android 7.0+) in app/README.md (lines 175–180) to align with Expo SDK 53 / React Native 0.79.
🤖 Prompt for AI Agents
In app/README.md around lines 175 to 180, the platform minimums are outdated;
update the iOS requirement to "iOS 15.1+" and change the Android requirement to
"Android API 24+ (Android 7.0+)" or "minSdkVersion 24" where the README
references Android API level, ensuring the text reflects Expo SDK 53 / React
Native 0.79 compatibility and keeps the existing notes about microphone
permissions and backend/network requirements unchanged.