Also regarding the destructor.
Fixes https://github.com/profanity-im/profanity/issues/1159
Another case of a double free() due to the new destructor.
Fixes https://github.com/profanity-im/profanity/issues/1156
Taken care of by the destructor.
Hotfix/omemo memleaks
Regards https://github.com/profanity-im/profanity/issues/1131
Regards https://github.com/profanity-im/profanity/issues/1148
Let's use calloc() instead of malloc() followed by setting almost all fields to NULL.
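
A minimal sketch of the difference, with an illustrative struct rather than the actual Profanity type:
```
#include <stdlib.h>

/* Illustrative struct; the real Profanity structs differ. */
typedef struct {
    char *jid;
    char *id;
    char *body;
} thing_t;

thing_t*
thing_new_malloc(void)
{
    /* malloc() leaves members uninitialized: every pointer must be
     * NULLed by hand, and one forgotten field means a destructor
     * later calls free() on garbage. */
    thing_t *t = malloc(sizeof(thing_t));
    t->jid = NULL;
    t->id = NULL;
    t->body = NULL;
    return t;
}

thing_t*
thing_new_calloc(void)
{
    /* calloc() zeroes the whole allocation, so every pointer starts
     * out NULL and free() on an unset member is a safe no-op. */
    return calloc(1, sizeof(thing_t));
}
```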
This is now taken care of in the destructor _pendingPresence_free().
Fix:
```
==18682== 408 bytes in 17 blocks are definitely lost in loss record 3,279 of 3,632
==18682== at 0x483677F: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==18682== by 0x42F602: roster_update_presence (roster_list.c:129)
==18682== by 0x448AA3: sv_ev_contact_online (server_events.c:906)
==18682== by 0x43D2BA: _available_handler (presence.c:674)
==18682== by 0x43C81B: _presence_handler (presence.c:398)
==18682== by 0x5AF118E: handler_fire_stanza (handler.c:124)
==18682== by 0x5AEDBDA: _handle_stream_stanza (conn.c:1253)
==18682== by 0x5AFA43E: _end_element (parser_expat.c:190)
==18682== by 0x6818AA4: doContent (xmlparse.c:2977)
==18682== by 0x681A3AB: contentProcessor (xmlparse.c:2552)
==18682== by 0x681D7EB: XML_ParseBuffer (xmlparse.c:1988)
==18682== by 0x681D7EB: XML_ParseBuffer (xmlparse.c:1957)
==18682== by 0x5AF0A63: xmpp_run_once (event.c:255)
==18682== by 0x432E5D: connection_check_events (connection.c:104)
==18682== by 0x4323B3: session_process_events (session.c:255)
==18682== by 0x42C097: prof_run (profanity.c:128)
==18682== by 0x4B25B9: main (main.c:172)
```
The free is now done in the destructor.
Regards https://github.com/profanity-im/profanity/issues/1148
omemo_key_free() was called to free the key.
It frees key->data too, but in some cases that was not set yet. So
we need to set data to NULL (or use calloc()) at initialization, so
that omemo_key_free() only frees it if it was actually allocated.
Regards https://github.com/profanity-im/profanity/issues/1148
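
A minimal sketch of the failure mode and the fix, using an illustrative stand-in for the real key struct:
```
#include <stdlib.h>

/* Illustrative stand-in; the real OMEMO key struct has more fields. */
typedef struct {
    unsigned char *data;
    size_t length;
} key_sketch_t;

void
key_free_sketch(key_sketch_t *key)
{
    if (!key) return;
    /* Safe only if data is a valid allocation or NULL. After a bare
     * malloc() with no initialization, data may hold garbage and
     * this free() corrupts the heap. */
    free(key->data);
    free(key);
}

key_sketch_t*
key_new_sketch(void)
{
    /* calloc() guarantees data == NULL until it is really set. */
    return calloc(1, sizeof(key_sketch_t));
}
```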
Fix several OMEMO-related leaks
So far only the key part was freed; we also need to free the actual
handler (a sketch of the pattern follows the trace).
Fix:
```
==21171== 1,128 bytes in 47 blocks are definitely lost in loss record 3,476 of 3,670
==21171== at 0x483677F: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==21171== by 0x434248: iq_id_handler_add (iq.c:265)
==21171== by 0x4B122E: omemo_devicelist_request (omemo.c:46)
==21171== by 0x4AC411: omemo_start_session (omemo.c:409)
==21171== by 0x4AC37C: omemo_start_sessions (omemo.c:396)
==21171== by 0x447881: sv_ev_roster_received (server_events.c:189)
==21171== by 0x444019: roster_result_handler (roster.c:312)
==21171== by 0x433FC2: _iq_handler (iq.c:202)
==21171== by 0x5AF118E: ??? (in /usr/lib64/libmesode.so.0.0.0)
==21171== by 0x5AEDBDA: ??? (in /usr/lib64/libmesode.so.0.0.0)
==21171== by 0x5AFA43E: ??? (in /usr/lib64/libmesode.so.0.0.0)
==21171== by 0x6818AA4: ??? (in /usr/lib64/libexpat.so.1.6.8)
==21171== by 0x681A3AB: ??? (in /usr/lib64/libexpat.so.1.6.8)
==21171== by 0x681D7EB: XML_ParseBuffer (in /usr/lib64/libexpat.so.1.6.8)
==21171== by 0x5AF0A63: xmpp_run_once (in /usr/lib64/libmesode.so.0.0.0)
==21171== by 0x432E5D: connection_check_events (connection.c:104)
==21171== by 0x4323B3: session_process_events (session.c:255)
==21171== by 0x42C097: prof_run (profanity.c:128)
==21171== by 0x4B2627: main (main.c:172)
```
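
A minimal sketch of that pattern with GLib, assuming the id handlers live in a GHashTable (struct and names are illustrative): register destroy functions for both the key and the value when the table is created, so dropping an entry, or the whole table, frees both.
```
#include <glib.h>

/* Illustrative handler record carrying the callback and its data. */
typedef struct {
    void (*func)(void *userdata);
    void *userdata;
} id_handler_sketch_t;

int
main(void)
{
    /* g_free for the strdup'd id (the key) AND g_free for the
     * handler struct (the value); before the fix only the key
     * was released. */
    GHashTable *id_handlers =
        g_hash_table_new_full(g_str_hash, g_str_equal, g_free, g_free);

    id_handler_sketch_t *handler = g_new0(id_handler_sketch_t, 1);
    g_hash_table_insert(id_handlers, g_strdup("iq-42"), handler);

    /* Both keys and values are released here. */
    g_hash_table_destroy(id_handlers);
    return 0;
}
```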
Fix:
```
==20561== 32 bytes in 1 blocks are definitely lost in loss record 1,467 of 3,678
==20561== at 0x483677F: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==20561== by 0x4B16C9: omemo_start_device_session_handle_bundle (omemo.c:167)
==20561== by 0x43405E: _iq_handler (iq.c:214)
==20561== by 0x5AF118E: ??? (in /usr/lib64/libmesode.so.0.0.0)
==20561== by 0x5AEDBDA: ??? (in /usr/lib64/libmesode.so.0.0.0)
==20561== by 0x5AFA43E: ??? (in /usr/lib64/libmesode.so.0.0.0)
==20561== by 0x6818AA4: ??? (in /usr/lib64/libexpat.so.1.6.8)
==20561== by 0x681A3AB: ??? (in /usr/lib64/libexpat.so.1.6.8)
==20561== by 0x681D7EB: XML_ParseBuffer (in /usr/lib64/libexpat.so.1.6.8)
==20561== by 0x5AF0A63: xmpp_run_once (in /usr/lib64/libmesode.so.0.0.0)
==20561== by 0x432E5D: connection_check_events (connection.c:104)
==20561== by 0x4323B3: session_process_events (session.c:255)
==20561== by 0x42C097: prof_run (profanity.c:128)
==20561== by 0x4B260D: main (main.c:172)
```
In some conditions we just returned without freeing the allocated variables.
Should fix the following valgrind-reported leak:
```
==17941== 19 bytes in 1 blocks are definitely lost in loss record 613 of 3,674
==17941== at 0x483677F: malloc (in /usr/lib64/valgrind/vgpreload_memcheck-amd64-linux.so)
==17941== by 0x5BB0DAA: strdup (strdup.c:42)
==17941== by 0x4B1592: omemo_start_device_session_handle_bundle (omemo.c:126)
==17941== by 0x43405E: _iq_handler (iq.c:214)
==17941== by 0x5AF118E: ??? (in /usr/lib64/libmesode.so.0.0.0)
==17941== by 0x5AEDBDA: ??? (in /usr/lib64/libmesode.so.0.0.0)
==17941== by 0x5AFA43E: ??? (in /usr/lib64/libmesode.so.0.0.0)
==17941== by 0x6818AA4: ??? (in /usr/lib64/libexpat.so.1.6.8)
==17941== by 0x681A3AB: ??? (in /usr/lib64/libexpat.so.1.6.8)
==17941== by 0x681D7EB: XML_ParseBuffer (in /usr/lib64/libexpat.so.1.6.8)
==17941== by 0x5AF0A63: xmpp_run_once (in /usr/lib64/libmesode.so.0.0.0)
==17941== by 0x432E5D: connection_check_events (connection.c:104)
==17941== by 0x4323B3: session_process_events (session.c:255)
==17941== by 0x42C097: prof_run (profanity.c:128)
==17941== by 0x4B2610: main (main.c:172)
```
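
The usual C remedy, as a minimal sketch with illustrative names: funnel every exit path through one cleanup label so an early return cannot skip the free() calls.
```
#include <stdlib.h>
#include <string.h>

int
handle_bundle_sketch(const char *from)
{
    int ret = -1;
    char *jid = strdup(from);   /* the strdup() from the trace above */
    char *device_id = NULL;

    if (jid == NULL)
        goto out;

    device_id = strdup("42");
    if (device_id == NULL)
        goto out;               /* early exit still frees jid */

    /* ... parse and verify the bundle ... */
    ret = 0;

out:
    free(device_id);            /* free(NULL) is a harmless no-op */
    free(jid);
    return ret;
}
```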
In case plain is NULL we need to copy it over from body.
Fixes https://github.com/profanity-im/profanity/issues/1144
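
A minimal sketch of the fallback, with illustrative field names:
```
#include <stdlib.h>
#include <string.h>

typedef struct {
    char *body;   /* raw stanza body */
    char *plain;  /* plaintext form, set by OMEMO decryption */
} msg_sketch_t;

void
ensure_plain(msg_sketch_t *msg)
{
    /* When no decryption happened, plain was never set; fall back
     * to a copy of body so later code can rely on plain. */
    if (msg->plain == NULL && msg->body != NULL) {
        msg->plain = strdup(msg->body);
    }
}
```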
Probably a missing copy of body to plain in carbons and private messages.
This only covers the incoming message path, because the goal is OMEMO
decryption of untrusted messages.
It covers some of the log functions, but not all.
Use it to print the message on a red background if it is not trusted.
Free pubsub_event_handlers. Fix memory leaks.
Free id_handlers. Fix memory leaks.
We destroy the roster in ev_disconnect_cleanup().
Add a function to test whether the roster has been destroyed, and test
for it in the status bar.
Now, when the connection is lost, 'Lost connection' is printed in all
open windows, and we can then reconnect with `/connect accountname`.
Should fix https://github.com/profanity-im/profanity/issues/1083
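
A minimal sketch of such a test, assuming a module-level roster pointer (names are illustrative):
```
#include <stdbool.h>
#include <stddef.h>

/* Illustrative module-level state, as a roster module might keep. */
static void *roster = NULL;

bool
roster_exists(void)
{
    /* Lets the status bar ask whether ev_disconnect_cleanup()
     * already destroyed the roster, instead of dereferencing a
     * dangling pointer. */
    return roster != NULL;
}
```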
Don't clear saved account data in session_disconnect()
If connection loss occurs, session_disconnect() is eventually called.
This function clears saved account data, which is required for
reconnection. Therefore, when the reconnect timer expires, we get errors:
```
02/06/2019 04:53:42: stderr: ERR: (profanity:17115): GLib-CRITICAL **: 04:53:42.305: g_key_file_has_group: assertion 'group_name != NULL' failed
02/06/2019 04:53:43: prof: ERR: Unable to reconnect, account no longer exists: (null)
```
To solve this, don't clear the saved data in session_disconnect(). It
will still be cleared properly on connection loss if the reconnect timer
is not configured, but it won't be cleared by the /disconnect command.
So, after /disconnect the data will live in memory until the next
/connect.
Also, remove some copy-paste in the connection-loss path.
If Profanity is disconnected in any way before a ping response is
received, the autoping timer will expire after the next connection
is established. As a result, the user will be disconnected immediately.
Cancel the autoping timer in ev_disconnect_cleanup(), so it is done
for all kinds of disconnections.
When the connection is lost, Profanity tries to disconnect, which leads
to an infinite loop. The loop occurs because connection_disconnect()
runs xmpp_run_once() separately and waits for the XMPP_CONN_DISCONNECT
event, but the event never arrives, because the connection object is
already disconnected.
As a solution, don't disconnect after XMPP_CONN_DISCONNECT is received.
Also, don't free libstrophe objects while the event loop executes,
because the event loop continues using the objects after the callbacks
return.
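
A sketch of the safe shape of the connection handler, using the libstrophe callback signature: only record the new state here, and leave xmpp_conn_release()/xmpp_ctx_free() to code that runs after xmpp_run_once() has returned.
```
#include <strophe.h>

static int disconnected = 0;

static void
conn_handler_sketch(xmpp_conn_t *conn, xmpp_conn_event_t status,
                    int error, xmpp_stream_error_t *stream_error,
                    void *userdata)
{
    if (status == XMPP_CONN_DISCONNECT) {
        /* The object is already disconnected: calling
         * xmpp_disconnect() again and waiting for another
         * XMPP_CONN_DISCONNECT would loop forever. Just note it. */
        disconnected = 1;
        /* Do NOT free the conn or ctx here; the event loop still
         * uses them after this callback returns. */
    }
}
```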
https://github.com/profanity-im/profanity/issues/1085
Regards https://github.com/profanity-im/profanity/issues/1085
Fixes https://github.com/boothj5/profanity/issues/1079
Presences of contacts not found in the roster are filtered out, but
sometimes the roster is received after the first few presences.
We choose to store presences until we receive the roster, and then
process those presences.
Fixes #1050
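
A minimal sketch of the buffering idea with GLib (types and names are illustrative, not Profanity's actual code):
```
#include <glib.h>

/* Illustrative saved-presence record. */
typedef struct {
    char *barejid;
    char *show;
    int priority;
} saved_presence_t;

static GSList *saved_presences = NULL;   /* buffered until the roster arrives */
static gboolean roster_received = FALSE;

static void
handle_presence(saved_presence_t *presence)
{
    if (!roster_received) {
        /* No roster yet: remember the presence for later. */
        saved_presences = g_slist_append(saved_presences, presence);
        return;
    }
    /* ... normal processing against the roster ... */
}

static void
on_roster_received(void)
{
    roster_received = TRUE;
    GSList *curr;
    for (curr = saved_presences; curr != NULL; curr = curr->next) {
        handle_presence(curr->data);     /* replay buffered presences */
    }
    g_slist_free(saved_presences);       /* the records are freed elsewhere */
    saved_presences = NULL;
}
```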
When auto-joining a MUC we don't have access to the required
information, so we just don't start OMEMO at that time.
Once we receive the disco info, we then try to start OMEMO.
Store the MUC anonymity type in the mucwin for that purpose.
Fixes #1065
| |
When connecting for the first time or when creating a new account don't
use only 'profanity' as default resource.
Some server don't support having 2 connection with same resource. Using
profanity as default lead to deconnections.
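
One possible way to derive a per-connection resource, as a sketch (the exact scheme used here may differ):
```
#include <glib.h>

/* Build a resource like "profanity.4ae1" so two clients on the
 * same account no longer collide on the bare "profanity" resource. */
gchar*
default_resource_sketch(void)
{
    return g_strdup_printf("profanity.%04x",
                           g_random_int_range(0, 0xffff));
}
```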
Add sv_ev_connection_features_received for that purpose
Reflected messages can't be filtered by nick alone, otherwise you might
ignore messages coming from you on other devices.
Consequently, we maintain a list of sent message IDs in the mucwin.
To be sure the ID will be correctly reflected, we use the origin-id
stanza element.
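
A minimal sketch of the bookkeeping, with illustrative names:
```
#include <glib.h>
#include <string.h>

/* Illustrative slice of the MUC window state. */
typedef struct {
    GSList *sent_ids;   /* origin-ids of messages we sent ourselves */
} mucwin_sketch_t;

static void
remember_sent_id(mucwin_sketch_t *win, const char *id)
{
    win->sent_ids = g_slist_prepend(win->sent_ids, g_strdup(id));
}

/* On a reflected message, drop it only if its origin-id is one we
 * sent from this client; matching on nick alone would also drop
 * messages sent from our other devices. */
static gboolean
is_own_reflection(mucwin_sketch_t *win, const char *origin_id)
{
    GSList *found = g_slist_find_custom(win->sent_ids, origin_id,
                                        (GCompareFunc)strcmp);
    if (found) {
        g_free(found->data);
        win->sent_ids = g_slist_delete_link(win->sent_ids, found);
        return TRUE;
    }
    return FALSE;
}
```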
The devicelist handler should be kept after it triggers.
We try to reconfigure the node and publish again.
If it fails again, we give up.